00:00:00.001 Started by upstream project "autotest-per-patch" build number 132290
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.078 The recommended git tool is: git
00:00:00.078 using credential 00000000-0000-0000-0000-000000000002
00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.147 Fetching changes from the remote Git repository
00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.209 Using shallow fetch with depth 1
00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.209 > git --version # timeout=10
00:00:00.283 > git --version # 'git version 2.39.2'
00:00:00.283 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.346 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.346 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.454 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.466 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.477 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.478 > git config core.sparsecheckout # timeout=10
00:00:05.488 > git read-tree -mu HEAD # timeout=10
00:00:05.503 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.521 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.521 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.610 [Pipeline] Start of Pipeline
00:00:05.620 [Pipeline] library
00:00:05.621 Loading library shm_lib@master
00:00:05.621 Library shm_lib@master is cached. Copying from home.
00:00:05.634 [Pipeline] node
00:00:05.643 Running on WFP9 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.644 [Pipeline] {
00:00:05.653 [Pipeline] catchError
00:00:05.654 [Pipeline] {
00:00:05.665 [Pipeline] wrap
00:00:05.673 [Pipeline] {
00:00:05.680 [Pipeline] stage
00:00:05.682 [Pipeline] { (Prologue)
00:00:05.913 [Pipeline] sh
00:00:06.207 + logger -p user.info -t JENKINS-CI
00:00:06.224 [Pipeline] echo
00:00:06.225 Node: WFP9
00:00:06.234 [Pipeline] sh
00:00:06.528 [Pipeline] setCustomBuildProperty
00:00:06.540 [Pipeline] echo
00:00:06.542 Cleanup processes
00:00:06.547 [Pipeline] sh
00:00:06.829 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.829 1188854 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.841 [Pipeline] sh
00:00:07.123 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.123 ++ grep -v 'sudo pgrep'
00:00:07.123 ++ awk '{print $1}'
00:00:07.123 + sudo kill -9
00:00:07.123 + true
00:00:07.138 [Pipeline] cleanWs
00:00:07.149 [WS-CLEANUP] Deleting project workspace...
00:00:07.149 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.155 [WS-CLEANUP] done
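The "Cleanup processes" step above hunts down stale SPDK processes from a previous run before the workspace is wiped. A minimal standalone sketch of that idiom, assuming the same workspace path; `xargs -r` stands in for the log's bare `kill -9` plus `+ true` so the empty case is a clean no-op:

```bash
#!/usr/bin/env bash
# Find leftover SPDK processes under the job workspace and kill them.
WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest

# pgrep -af prints "PID CMDLINE"; drop our own pgrep invocation,
# keep only the PID column, and kill whatever remains.
sudo pgrep -af "$WORKSPACE/spdk" \
  | grep -v 'sudo pgrep' \
  | awk '{print $1}' \
  | xargs -r sudo kill -9
```

In the run above the pipeline found nothing to kill, so `kill -9` ran without arguments and the trailing `+ true` swallowed its non-zero exit.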
00:00:07.160 [Pipeline] setCustomBuildProperty
00:00:07.175 [Pipeline] sh
00:00:07.456 + sudo git config --global --replace-all safe.directory '*'
00:00:07.559 [Pipeline] httpRequest
00:00:07.947 [Pipeline] echo
00:00:07.955 Sorcerer 10.211.164.20 is alive
00:00:07.991 [Pipeline] retry
00:00:07.995 [Pipeline] {
00:00:08.012 [Pipeline] httpRequest
00:00:08.016 HttpMethod: GET
00:00:08.016 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.017 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.027 Response Code: HTTP/1.1 200 OK
00:00:08.028 Success: Status code 200 is in the accepted range: 200,404
00:00:08.028 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:15.378 [Pipeline] }
00:00:15.395 [Pipeline] // retry
00:00:15.403 [Pipeline] sh
00:00:15.684 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:15.702 [Pipeline] httpRequest
00:00:16.173 [Pipeline] echo
00:00:16.174 Sorcerer 10.211.164.20 is alive
00:00:16.182 [Pipeline] retry
00:00:16.184 [Pipeline] {
00:00:16.200 [Pipeline] httpRequest
00:00:16.205 HttpMethod: GET
00:00:16.206 URL: http://10.211.164.20/packages/spdk_30279d1cf62478704082934ffb127fcd024b733f.tar.gz
00:00:16.206 Sending request to url: http://10.211.164.20/packages/spdk_30279d1cf62478704082934ffb127fcd024b733f.tar.gz
00:00:16.212 Response Code: HTTP/1.1 200 OK
00:00:16.212 Success: Status code 200 is in the accepted range: 200,404
00:00:16.213 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_30279d1cf62478704082934ffb127fcd024b733f.tar.gz
00:03:09.403 [Pipeline] }
00:03:09.422 [Pipeline] // retry
00:03:09.430 [Pipeline] sh
00:03:09.711 + tar --no-same-owner -xf spdk_30279d1cf62478704082934ffb127fcd024b733f.tar.gz
00:03:12.255 [Pipeline] sh
00:03:12.545 + git -C spdk log --oneline -n5
00:03:12.545 30279d1cf bdev: Add spdk_bdev_io_has_no_metadata() for bdev modules
00:03:12.545 4bd31eb0a bdev/malloc: Extract internal of verify_pi() for code reuse
00:03:12.545 2093c51b3 bdev/malloc: malloc_done() uses switch-case for clean up
00:03:12.545 8c4dec1aa nvmf: Add no_metadata option to nvmf_subsystem_add_ns
00:03:12.545 e029afccb nvmf: Get metadata config by not bdev but bdev_desc
00:03:12.555 [Pipeline] }
00:03:12.568 [Pipeline] // stage
00:03:12.577 [Pipeline] stage
00:03:12.579 [Pipeline] { (Prepare)
00:03:12.596 [Pipeline] writeFile
00:03:12.611 [Pipeline] sh
00:03:12.893 + logger -p user.info -t JENKINS-CI
00:03:12.905 [Pipeline] sh
00:03:13.185 + logger -p user.info -t JENKINS-CI
00:03:13.197 [Pipeline] sh
00:03:13.477 + cat autorun-spdk.conf
00:03:13.477 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:13.477 SPDK_TEST_NVMF=1
00:03:13.477 SPDK_TEST_NVME_CLI=1
00:03:13.477 SPDK_TEST_NVMF_NICS=mlx5
00:03:13.477 SPDK_RUN_UBSAN=1
00:03:13.477 NET_TYPE=phy
00:03:13.484 RUN_NIGHTLY=0
00:03:13.487 [Pipeline] readFile
00:03:13.504 [Pipeline] withEnv
00:03:13.505 [Pipeline] {
00:03:13.515 [Pipeline] sh
00:03:13.795 + set -ex
00:03:13.795 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:03:13.795 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:13.795 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:13.795 ++ SPDK_TEST_NVMF=1
00:03:13.795 ++ SPDK_TEST_NVME_CLI=1
00:03:13.795 ++ SPDK_TEST_NVMF_NICS=mlx5
00:03:13.795 ++ SPDK_RUN_UBSAN=1
00:03:13.795 ++ NET_TYPE=phy
00:03:13.795 ++ RUN_NIGHTLY=0
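The per-job flags live in autorun-spdk.conf, written during Prepare and sourced under `set -ex` as echoed above. A hedged sketch of how a downstream script consumes such a config (the keys are the ones shown in this log; the branch body is illustrative):

```bash
#!/usr/bin/env bash
set -ex

# Source the job flags if the config exists (same file sourced above).
conf=/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
[[ -f $conf ]] && source "$conf"

# Later steps branch on the exported flags, e.g. NIC driver selection:
case $SPDK_TEST_NVMF_NICS in
    mlx5) DRIVERS=mlx5_ib ;;   # the case arm taken in the log below
    *)    DRIVERS= ;;
esac
echo "functional=$SPDK_RUN_FUNCTIONAL_TEST ubsan=$SPDK_RUN_UBSAN drivers=$DRIVERS"
```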
00:03:13.795 + case $SPDK_TEST_NVMF_NICS in
00:03:13.795 + DRIVERS=mlx5_ib
00:03:13.795 + [[ -n mlx5_ib ]]
00:03:13.795 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:13.795 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:17.080 rmmod: ERROR: Module irdma is not currently loaded
00:03:17.080 rmmod: ERROR: Module i40iw is not currently loaded
00:03:17.080 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:17.080 + true
00:03:17.080 + for D in $DRIVERS
00:03:17.080 + sudo modprobe mlx5_ib
00:03:17.080 + exit 0
00:03:17.089 [Pipeline] }
00:03:17.106 [Pipeline] // withEnv
00:03:17.111 [Pipeline] }
00:03:17.127 [Pipeline] // stage
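That step resets the RDMA stack before testing: every NIC module the script knows about is unloaded, "not currently loaded" errors are tolerated, and only the driver selected by SPDK_TEST_NVMF_NICS is loaded back. A standalone sketch with the same module list:

```bash
#!/usr/bin/env bash
# Unload all candidate RDMA NIC modules; most are absent, so ignore
# rmmod failures the same way the pipeline's `+ true` does.
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true

# Load only the driver this job needs (mlx5_ib for SPDK_TEST_NVMF_NICS=mlx5).
DRIVERS=mlx5_ib
for D in $DRIVERS; do
    sudo modprobe "$D"
done
```

Note there is no rmmod error for mlx5_ib itself in the log: it was loaded, removed, and then reloaded fresh by modprobe.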
00:03:17.137 [Pipeline] catchError
00:03:17.138 [Pipeline] {
00:03:17.154 [Pipeline] timeout
00:03:17.154 Timeout set to expire in 1 hr 0 min
00:03:17.157 [Pipeline] {
00:03:17.173 [Pipeline] stage
00:03:17.175 [Pipeline] { (Tests)
00:03:17.190 [Pipeline] sh
00:03:17.472 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:03:17.472 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:03:17.472 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:03:17.472 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:03:17.472 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:17.472 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:03:17.472 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:03:17.472 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:03:17.472 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:03:17.472 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:03:17.472 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:03:17.472 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:03:17.472 + source /etc/os-release
00:03:17.472 ++ NAME='Fedora Linux'
00:03:17.472 ++ VERSION='39 (Cloud Edition)'
00:03:17.472 ++ ID=fedora
00:03:17.472 ++ VERSION_ID=39
00:03:17.472 ++ VERSION_CODENAME=
00:03:17.472 ++ PLATFORM_ID=platform:f39
00:03:17.472 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:17.472 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:17.472 ++ LOGO=fedora-logo-icon
00:03:17.472 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:17.472 ++ HOME_URL=https://fedoraproject.org/
00:03:17.472 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:17.472 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:17.472 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:17.472 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:17.472 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:17.472 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:17.472 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:17.472 ++ SUPPORT_END=2024-11-12
00:03:17.472 ++ VARIANT='Cloud Edition'
00:03:17.472 ++ VARIANT_ID=cloud
00:03:17.472 + uname -a
00:03:17.472 Linux spdk-wfp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:17.472 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:03:20.004 Hugepages
00:03:20.004 node hugesize free / total
00:03:20.004 node0 1048576kB 0 / 0
00:03:20.004 node0 2048kB 0 / 0
00:03:20.004 node1 1048576kB 0 / 0
00:03:20.004 node1 2048kB 0 / 0
00:03:20.004
00:03:20.004 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:20.004 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:20.004 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:20.004 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:20.004 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:20.004 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:20.004 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:20.004 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:20.004 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:20.004 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:20.004 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:20.004 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:20.004 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:20.004 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:20.004 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:20.004 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:20.004 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:20.004 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:20.004 + rm -f /tmp/spdk-ld-path
00:03:20.004 + source autorun-spdk.conf
00:03:20.004 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:20.004 ++ SPDK_TEST_NVMF=1
00:03:20.004 ++ SPDK_TEST_NVME_CLI=1
00:03:20.004 ++ SPDK_TEST_NVMF_NICS=mlx5
00:03:20.004 ++ SPDK_RUN_UBSAN=1
00:03:20.004 ++ NET_TYPE=phy
00:03:20.004 ++ RUN_NIGHTLY=0
00:03:20.004 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:20.004 + [[ -n '' ]]
00:03:20.004 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:20.004 + for M in /var/spdk/build-*-manifest.txt
00:03:20.004 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:20.004 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:20.004 + for M in /var/spdk/build-*-manifest.txt
00:03:20.005 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:20.005 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:20.005 + for M in /var/spdk/build-*-manifest.txt
00:03:20.005 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:20.005 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
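The loop above collects whatever build manifests the provisioning step left on the machine into the job's output directory. A minimal standalone version, using the same glob and destination:

```bash
#!/usr/bin/env bash
# Copy any build manifests that exist into the job output directory;
# the -f test skips glob entries that do not resolve to a real file.
out=/var/jenkins/workspace/nvmf-phy-autotest/output
for M in /var/spdk/build-*-manifest.txt; do
    [[ -f $M ]] && cp "$M" "$out/"
done
```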
00:03:20.005 ++ uname
00:03:20.005 + [[ Linux == \L\i\n\u\x ]]
00:03:20.005 + sudo dmesg -T
00:03:20.005 + sudo dmesg --clear
00:03:20.005 + dmesg_pid=1190633
00:03:20.005 + [[ Fedora Linux == FreeBSD ]]
00:03:20.005 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:20.005 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:20.005 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:20.005 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:20.005 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:20.005 + [[ -x /usr/src/fio-static/fio ]]
00:03:20.005 + export FIO_BIN=/usr/src/fio-static/fio
00:03:20.005 + FIO_BIN=/usr/src/fio-static/fio
00:03:20.005 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:20.005 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:20.005 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:20.005 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:20.005 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:20.005 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:20.005 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:20.005 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:20.005 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:20.005 + sudo dmesg -Tw
00:03:20.005 10:45:08 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:03:20.005 10:45:08 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:20.005 10:45:08 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:20.005 10:45:08 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:20.005 10:45:08 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:20.005 10:45:08 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
00:03:20.005 10:45:08 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
00:03:20.005 10:45:08 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy
00:03:20.005 10:45:08 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0
00:03:20.005 10:45:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:20.005 10:45:08 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:20.263 10:45:08 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:03:20.263 10:45:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:03:20.263 10:45:08 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:20.263 10:45:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:20.263 10:45:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:20.263 10:45:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:20.263 10:45:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.263 10:45:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.263 10:45:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.263 10:45:08 -- paths/export.sh@5 -- $ export PATH
00:03:20.263 10:45:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
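Each time paths/export.sh is sourced it prepends the Go, protoc, and golangci directories again, which is why the PATH echoed above contains them two and three times. That is harmless, but a hedged sketch of a dedup-on-prepend helper that would keep PATH flat:

```bash
# Prepend a directory to PATH only if it is not already present.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH
```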
00:03:20.264 10:45:08 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:03:20.264 10:45:08 -- common/autobuild_common.sh@486 -- $ date +%s
00:03:20.264 10:45:08 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731663908.XXXXXX
00:03:20.264 10:45:08 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731663908.Bgy50b
00:03:20.264 10:45:08 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:03:20.264 10:45:08 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:03:20.264 10:45:08 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:03:20.264 10:45:08 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:20.264 10:45:08 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:20.264 10:45:08 -- common/autobuild_common.sh@502 -- $ get_config_params
00:03:20.264 10:45:08 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:03:20.264 10:45:08 -- common/autotest_common.sh@10 -- $ set +x
00:03:20.264 10:45:08 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:03:20.264 10:45:08 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:03:20.264 10:45:08 -- pm/common@17 -- $ local monitor
00:03:20.264 10:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.264 10:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.264 10:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.264 10:45:08 -- pm/common@21 -- $ date +%s
00:03:20.264 10:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.264 10:45:08 -- pm/common@21 -- $ date +%s
00:03:20.264 10:45:08 -- pm/common@21 -- $ date +%s
00:03:20.264 10:45:08 -- pm/common@25 -- $ sleep 1
00:03:20.264 10:45:08 -- pm/common@21 -- $ date +%s
00:03:20.264 10:45:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663908
00:03:20.264 10:45:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663908
00:03:20.264 10:45:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663908
00:03:20.264 10:45:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731663908
00:03:20.264 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663908_collect-cpu-temp.pm.log
00:03:20.264 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663908_collect-vmstat.pm.log
00:03:20.264 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663908_collect-cpu-load.pm.log
00:03:20.264 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731663908_collect-bmc-pm.bmc.pm.log
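start_monitor_resources launches the power-management collectors in the background, and all of them embed the same `date +%s` epoch (1731663908) in their log names so the samples can be correlated afterwards. A hedged sketch of that pattern; the collector names and flags are the ones visible above, the loop itself is illustrative:

```bash
#!/usr/bin/env bash
# One shared timestamp so every monitor log from this run lines up.
ts=$(date +%s)
pm=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power

for mon in collect-cpu-load collect-cpu-temp collect-vmstat; do
    "$pm/$mon" -d "$out" -l -p "monitor.autobuild.sh.$ts" &
done
# collect-bmc-pm needs root for BMC access, hence sudo -E in the log.
sudo -E "$pm/collect-bmc-pm" -d "$out" -l -p "monitor.autobuild.sh.$ts" &
```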
00:03:21.199 10:45:09 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:03:21.199 10:45:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:21.199 10:45:09 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:21.199 10:45:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:21.199 10:45:09 -- spdk/autobuild.sh@16 -- $ date -u
00:03:21.199 Fri Nov 15 09:45:09 AM UTC 2024
00:03:21.199 10:45:09 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:21.199 v25.01-pre-208-g30279d1cf
00:03:21.199 10:45:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:21.199 10:45:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:21.199 10:45:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:21.199 10:45:09 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:21.199 10:45:09 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:21.199 10:45:09 -- common/autotest_common.sh@10 -- $ set +x
00:03:21.199 ************************************
00:03:21.199 START TEST ubsan
00:03:21.199 ************************************
00:03:21.199 10:45:10 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:03:21.199 using ubsan
00:03:21.199
00:03:21.199 real 0m0.000s
00:03:21.199 user 0m0.000s
00:03:21.199 sys 0m0.000s
00:03:21.199 10:45:10 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:03:21.199 10:45:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:21.199 ************************************
00:03:21.199 END TEST ubsan
00:03:21.199 ************************************
00:03:21.199 10:45:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:21.199 10:45:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:21.199 10:45:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:21.199 10:45:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:21.199 10:45:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:21.199 10:45:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:21.199 10:45:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:21.199 10:45:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:21.199 10:45:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:03:21.458 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:03:21.458 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:03:21.716 Using 'verbs' RDMA provider
00:03:34.486 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:44.565 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:44.823 Creating mk/config.mk...done.
00:03:44.823 Creating mk/cc.flags.mk...done.
00:03:44.823 Type 'make' to build.
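Both the ubsan check above and the build that follows are driven through SPDK's run_test helper, which brackets a command with START/END banners. A simplified, hedged sketch of that style of wrapper (not the actual implementation in common/autotest_common.sh):

```bash
# Simplified run_test-style wrapper: banner, run, banner, keep exit code.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test make make -j96    # the invocation that follows in this log
```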
10:45:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
10:45:33 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
10:45:33 -- common/autotest_common.sh@1109 -- $ xtrace_disable
10:45:33 -- common/autotest_common.sh@10 -- $ set +x
00:03:44.823 ************************************
00:03:44.823 START TEST make
00:03:44.823 ************************************
00:03:45.388 make[1]: Nothing to be done for 'all'.
00:03:53.522 The Meson build system
00:03:53.522 Version: 1.5.0
00:03:53.522 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:03:53.522 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:03:53.522 Build type: native build
00:03:53.522 Program cat found: YES (/usr/bin/cat)
00:03:53.522 Project name: DPDK
00:03:53.522 Project version: 24.03.0
00:03:53.523 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:53.523 C linker for the host machine: cc ld.bfd 2.40-14
00:03:53.523 Host machine cpu family: x86_64
00:03:53.523 Host machine cpu: x86_64
00:03:53.523 Message: ## Building in Developer Mode ##
00:03:53.523 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:53.523 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:53.523 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:53.523 Program python3 found: YES (/usr/bin/python3)
00:03:53.523 Program cat found: YES (/usr/bin/cat)
00:03:53.523 Compiler for C supports arguments -march=native: YES
00:03:53.523 Checking for size of "void *" : 8
00:03:53.523 Checking for size of "void *" : 8 (cached)
00:03:53.523 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:53.523 Library m found: YES
00:03:53.523 Library numa found: YES
00:03:53.523 Has header "numaif.h" : YES
00:03:53.523 Library fdt found: NO
00:03:53.523 Library execinfo found: NO
00:03:53.523 Has header "execinfo.h" : YES
00:03:53.523 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:53.523 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:53.523 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:53.523 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:53.523 Run-time dependency openssl found: YES 3.1.1
00:03:53.523 Run-time dependency libpcap found: YES 1.10.4
00:03:53.523 Has header "pcap.h" with dependency libpcap: YES
00:03:53.523 Compiler for C supports arguments -Wcast-qual: YES
00:03:53.523 Compiler for C supports arguments -Wdeprecated: YES
00:03:53.523 Compiler for C supports arguments -Wformat: YES
00:03:53.523 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:53.523 Compiler for C supports arguments -Wformat-security: NO
00:03:53.523 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:53.523 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:53.523 Compiler for C supports arguments -Wnested-externs: YES
00:03:53.523 Compiler for C supports arguments -Wold-style-definition: YES
00:03:53.523 Compiler for C supports arguments -Wpointer-arith: YES
00:03:53.523 Compiler for C supports arguments -Wsign-compare: YES
00:03:53.523 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:53.523 Compiler for C supports arguments -Wundef: YES
00:03:53.523 Compiler for C supports arguments -Wwrite-strings: YES
00:03:53.523 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:53.523 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:53.523 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:53.523 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:53.523 Program objdump found: YES (/usr/bin/objdump)
00:03:53.523 Compiler for C supports arguments -mavx512f: YES
00:03:53.523 Checking if "AVX512 checking" compiles: YES
00:03:53.523 Fetching value of define "__SSE4_2__" : 1
00:03:53.523 Fetching value of define "__AES__" : 1
00:03:53.523 Fetching value of define "__AVX__" : 1
00:03:53.523 Fetching value of define "__AVX2__" : 1
00:03:53.523 Fetching value of define "__AVX512BW__" : 1
00:03:53.523 Fetching value of define "__AVX512CD__" : 1
00:03:53.523 Fetching value of define "__AVX512DQ__" : 1
00:03:53.523 Fetching value of define "__AVX512F__" : 1
00:03:53.523 Fetching value of define "__AVX512VL__" : 1
00:03:53.523 Fetching value of define "__PCLMUL__" : 1
00:03:53.523 Fetching value of define "__RDRND__" : 1
00:03:53.523 Fetching value of define "__RDSEED__" : 1
00:03:53.523 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:53.523 Fetching value of define "__znver1__" : (undefined)
00:03:53.523 Fetching value of define "__znver2__" : (undefined)
00:03:53.523 Fetching value of define "__znver3__" : (undefined)
00:03:53.523 Fetching value of define "__znver4__" : (undefined)
00:03:53.523 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:53.523 Message: lib/log: Defining dependency "log"
00:03:53.523 Message: lib/kvargs: Defining dependency "kvargs"
00:03:53.523 Message: lib/telemetry: Defining dependency "telemetry"
00:03:53.523 Checking for function "getentropy" : NO
00:03:53.523 Message: lib/eal: Defining dependency "eal"
00:03:53.523 Message: lib/ring: Defining dependency "ring"
00:03:53.523 Message: lib/rcu: Defining dependency "rcu"
00:03:53.523 Message: lib/mempool: Defining dependency "mempool"
00:03:53.523 Message: lib/mbuf: Defining dependency "mbuf"
00:03:53.523 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:53.523 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:53.523 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:53.523 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:53.523 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:53.523 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:53.523 Compiler for C supports arguments -mpclmul: YES
00:03:53.523 Compiler for C supports arguments -maes: YES
00:03:53.523 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:53.523 Compiler for C supports arguments -mavx512bw: YES
00:03:53.523 Compiler for C supports arguments -mavx512dq: YES
00:03:53.523 Compiler for C supports arguments -mavx512vl: YES
00:03:53.523 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:53.523 Compiler for C supports arguments -mavx2: YES
00:03:53.523 Compiler for C supports arguments -mavx: YES
00:03:53.523 Message: lib/net: Defining dependency "net"
00:03:53.523 Message: lib/meter: Defining dependency "meter"
00:03:53.523 Message: lib/ethdev: Defining dependency "ethdev"
00:03:53.523 Message: lib/pci: Defining dependency "pci"
00:03:53.523 Message: lib/cmdline: Defining dependency "cmdline"
00:03:53.523 Message: lib/hash: Defining dependency "hash"
00:03:53.523 Message: lib/timer: Defining dependency "timer"
00:03:53.523 Message: lib/compressdev: Defining dependency "compressdev"
00:03:53.523 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:53.523 Message: lib/dmadev: Defining dependency "dmadev"
00:03:53.523 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:53.523 Message: lib/power: Defining dependency "power"
00:03:53.523 Message: lib/reorder: Defining dependency "reorder"
00:03:53.523 Message: lib/security: Defining dependency "security"
00:03:53.523 Has header "linux/userfaultfd.h" : YES
00:03:53.523 Has header "linux/vduse.h" : YES
00:03:53.523 Message: lib/vhost: Defining dependency "vhost"
00:03:53.523 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:53.523 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:53.523 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:53.523 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:53.523 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:53.523 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:53.523 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:53.523 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:53.523 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:53.523 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:53.523 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:53.523 Configuring doxy-api-html.conf using configuration
00:03:53.523 Configuring doxy-api-man.conf using configuration
00:03:53.523 Program mandb found: YES (/usr/bin/mandb)
00:03:53.523 Program sphinx-build found: NO
00:03:53.523 Configuring rte_build_config.h using configuration
00:03:53.523 Message:
00:03:53.523 =================
00:03:53.523 Applications Enabled
00:03:53.523 =================
00:03:53.523
00:03:53.523 apps:
00:03:53.523
00:03:53.523
00:03:53.523 Message:
00:03:53.523 =================
00:03:53.523 Libraries Enabled
00:03:53.523 =================
00:03:53.523
00:03:53.523 libs:
00:03:53.523 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:53.523 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:53.523 cryptodev, dmadev, power, reorder, security, vhost,
00:03:53.523
00:03:53.523 Message:
00:03:53.523 ===============
00:03:53.523 Drivers Enabled
00:03:53.523 ===============
00:03:53.523
00:03:53.523 common:
00:03:53.523
00:03:53.523 bus:
00:03:53.523 pci, vdev,
00:03:53.523 mempool:
00:03:53.523 ring,
00:03:53.523 dma:
00:03:53.523
00:03:53.523 net:
00:03:53.523
00:03:53.523 crypto:
00:03:53.523
00:03:53.523 compress:
00:03:53.523
00:03:53.523 vdpa:
00:03:53.523
00:03:53.523
00:03:53.523 Message:
00:03:53.523 =================
00:03:53.523 Content Skipped
00:03:53.523 =================
00:03:53.523
00:03:53.523 apps:
00:03:53.523 dumpcap: explicitly disabled via build config
00:03:53.523 graph: explicitly disabled via build config
00:03:53.523 pdump: explicitly disabled via build config
00:03:53.523 proc-info: explicitly disabled via build config
00:03:53.523 test-acl: explicitly disabled via build config
00:03:53.523 test-bbdev: explicitly disabled via build config
00:03:53.523 test-cmdline: explicitly disabled via build config
00:03:53.523 test-compress-perf: explicitly disabled via build config
00:03:53.523 test-crypto-perf: explicitly disabled via build config
00:03:53.523 test-dma-perf: explicitly disabled via build config
00:03:53.523 test-eventdev: explicitly disabled via build config
00:03:53.523 test-fib: explicitly disabled via build config
00:03:53.523 test-flow-perf: explicitly disabled via build config
00:03:53.523 test-gpudev: explicitly disabled via build config
00:03:53.523 test-mldev: explicitly disabled via build config
00:03:53.523 test-pipeline: explicitly disabled via build config
00:03:53.523 test-pmd: explicitly disabled via build config
00:03:53.523 test-regex: explicitly disabled via build config
00:03:53.523 test-sad: explicitly disabled via build config
00:03:53.523 test-security-perf: explicitly disabled via build config
00:03:53.523
00:03:53.523 libs:
00:03:53.523 argparse: explicitly disabled via build config
00:03:53.523 metrics: explicitly disabled via build config
00:03:53.523 acl: explicitly disabled via build config
00:03:53.523 bbdev: explicitly disabled via build config
00:03:53.523 bitratestats: explicitly disabled via build config
00:03:53.523 bpf: explicitly disabled via build config
00:03:53.523 cfgfile: explicitly disabled via build config
00:03:53.523 distributor: explicitly disabled via build config
00:03:53.523 efd: explicitly disabled via build config
00:03:53.523 eventdev: explicitly disabled via build config
00:03:53.523 dispatcher: explicitly disabled via build config
00:03:53.523 gpudev: explicitly disabled via build config
00:03:53.523 gro: explicitly disabled via build config
00:03:53.523 gso: explicitly disabled via build config
00:03:53.523 ip_frag: explicitly disabled via build config
00:03:53.523 jobstats: explicitly disabled via build config
00:03:53.523 latencystats: explicitly disabled via build config
00:03:53.523 lpm: explicitly disabled via build config
00:03:53.524 member: explicitly disabled via build config
00:03:53.524 pcapng: explicitly disabled via build config
00:03:53.524 rawdev: explicitly disabled via build config
00:03:53.524 regexdev: explicitly disabled via build config
00:03:53.524 mldev: explicitly disabled via build config
00:03:53.524 rib: explicitly disabled via build config
00:03:53.524 sched: explicitly disabled via build config
00:03:53.524 stack: explicitly disabled via build config
00:03:53.524 ipsec: explicitly disabled via build config
00:03:53.524 pdcp: explicitly disabled via build config
00:03:53.524 fib: explicitly disabled via build config
00:03:53.524 port: explicitly disabled via build config
00:03:53.524 pdump: explicitly disabled via build config
00:03:53.524 table: explicitly disabled via build config
00:03:53.524 pipeline: explicitly disabled via build config
00:03:53.524 graph: explicitly disabled via build config
00:03:53.524 node: explicitly disabled via build config
00:03:53.524
00:03:53.524 drivers:
00:03:53.524 common/cpt: not in enabled drivers build config
00:03:53.524 common/dpaax: not in enabled drivers build config
00:03:53.524 common/iavf: not in enabled drivers build config
00:03:53.524 common/idpf: not in enabled drivers build config
00:03:53.524 common/ionic: not in enabled drivers build config
00:03:53.524 common/mvep: not in enabled drivers build config
00:03:53.524 common/octeontx: not in enabled drivers build config
00:03:53.524 bus/auxiliary: not in enabled drivers build config
00:03:53.524 bus/cdx: not in enabled drivers build config
00:03:53.524 bus/dpaa: not in enabled drivers build config
00:03:53.524 bus/fslmc: not in enabled drivers build config
00:03:53.524 bus/ifpga: not in enabled drivers build config
00:03:53.524 bus/platform: not in enabled drivers build config
00:03:53.524 bus/uacce: not in enabled drivers build config
00:03:53.524 bus/vmbus: not in enabled drivers build config
00:03:53.524 common/cnxk: not in enabled drivers build config
00:03:53.524 common/mlx5: not in enabled drivers build config
00:03:53.524 common/nfp: not in enabled drivers build config
00:03:53.524 common/nitrox: not in enabled drivers build config
00:03:53.524 common/qat: not in enabled drivers build config
00:03:53.524 common/sfc_efx: not in enabled drivers build config
00:03:53.524 mempool/bucket: not in enabled drivers build config
00:03:53.524 mempool/cnxk: not in enabled drivers build config
00:03:53.524 mempool/dpaa: not in enabled drivers build config
00:03:53.524 mempool/dpaa2: not in enabled drivers build config
00:03:53.524 mempool/octeontx: not in enabled drivers build config
00:03:53.524 mempool/stack: not in enabled drivers build config
00:03:53.524 dma/cnxk: not in enabled drivers build config
00:03:53.524 dma/dpaa: not in enabled drivers build config
00:03:53.524 dma/dpaa2: not in enabled drivers build config
00:03:53.524 dma/hisilicon: not in enabled drivers build config
00:03:53.524 dma/idxd: not in enabled drivers build config
00:03:53.524 dma/ioat: not in enabled drivers build config
00:03:53.524 dma/skeleton: not in enabled drivers build config
00:03:53.524 net/af_packet: not in enabled drivers build config
00:03:53.524 net/af_xdp: not in enabled drivers build config
00:03:53.524 net/ark: not in enabled drivers build config
00:03:53.524 net/atlantic: not in enabled drivers build config
00:03:53.524 net/avp: not in enabled drivers build config
00:03:53.524 net/axgbe: not in enabled drivers build config
00:03:53.524 net/bnx2x: not in enabled drivers build config
00:03:53.524 net/bnxt: not in enabled drivers build config
00:03:53.524 net/bonding: not in enabled drivers build config
00:03:53.524 net/cnxk: not in enabled drivers build config
00:03:53.524 net/cpfl: not in enabled drivers build config
00:03:53.524 net/cxgbe: not in enabled drivers build config
00:03:53.524 net/dpaa: not in enabled drivers build config
00:03:53.524 net/dpaa2: not in enabled drivers build config
00:03:53.524 net/e1000: not in enabled drivers build config
00:03:53.524 net/ena: not in enabled drivers build config
00:03:53.524 net/enetc: not in enabled drivers build config
00:03:53.524 net/enetfec: not in enabled drivers build config
00:03:53.524 net/enic: not in enabled drivers build config
00:03:53.524 net/failsafe: not in enabled drivers build config
00:03:53.524 net/fm10k: not in enabled drivers build config
00:03:53.524 net/gve: not in enabled drivers build config
00:03:53.524 net/hinic: not in enabled drivers build config
00:03:53.524 net/hns3: not in enabled drivers build config
00:03:53.524 net/i40e: not in enabled drivers build config
00:03:53.524 net/iavf: not in enabled drivers build config
00:03:53.524 net/ice: not in enabled drivers build config
00:03:53.524 net/idpf: not in enabled drivers build config
00:03:53.524 net/igc: not in enabled drivers build config
00:03:53.524 net/ionic: not in enabled drivers build config
00:03:53.524 net/ipn3ke: not in enabled drivers build config
00:03:53.524 net/ixgbe: not in enabled drivers build config
00:03:53.524 net/mana: not in enabled drivers build config
00:03:53.524 net/memif: not in enabled drivers build config
00:03:53.524 net/mlx4: not in enabled drivers build config
00:03:53.524 net/mlx5: not in enabled drivers build config
00:03:53.524 net/mvneta: not in enabled drivers build config
00:03:53.524 net/mvpp2: not in enabled drivers build config
00:03:53.524 net/netvsc: not in enabled drivers build config
00:03:53.524 net/nfb: not in enabled drivers build config
00:03:53.524 net/nfp: not in enabled drivers build config
00:03:53.524 net/ngbe: not in enabled drivers build config
00:03:53.524 net/null: not in enabled drivers build config
00:03:53.524 net/octeontx: not in enabled drivers build config
00:03:53.524 net/octeon_ep: not in enabled drivers build config
00:03:53.524 net/pcap: not in enabled drivers build config
00:03:53.524 net/pfe: not in enabled drivers build config
00:03:53.524 net/qede: not in enabled drivers build config
00:03:53.524 net/ring: not in enabled drivers build config
00:03:53.524 net/sfc: not in enabled drivers build config
00:03:53.524 net/softnic: not in enabled drivers build config
00:03:53.524 net/tap: not in enabled drivers build config
00:03:53.524 net/thunderx: not in enabled drivers build config
00:03:53.524 net/txgbe: not in enabled drivers build config
00:03:53.524 net/vdev_netvsc: not in enabled drivers build config
00:03:53.524 net/vhost: not in enabled drivers build config
00:03:53.524 net/virtio: not in enabled drivers build config
00:03:53.524 net/vmxnet3: not in enabled drivers build config
00:03:53.524 raw/*: missing internal dependency, "rawdev"
00:03:53.524 crypto/armv8: not in enabled drivers build config
00:03:53.524 crypto/bcmfs: not in enabled drivers build config
00:03:53.524 crypto/caam_jr: not in enabled drivers build config
00:03:53.524 crypto/ccp: not in enabled drivers build config
00:03:53.524 crypto/cnxk: not in enabled drivers build config
00:03:53.524 crypto/dpaa_sec: not in enabled drivers build config
00:03:53.524 crypto/dpaa2_sec: not in enabled drivers build config
00:03:53.524 crypto/ipsec_mb: not in enabled drivers build config
00:03:53.524 crypto/mlx5: not in enabled drivers build config
00:03:53.524 crypto/mvsam: not in enabled drivers build config
00:03:53.524 crypto/nitrox: not in enabled drivers build config
00:03:53.524 crypto/null: not in enabled drivers build config
00:03:53.524 crypto/octeontx: not in enabled drivers build config
00:03:53.524 crypto/openssl: not in enabled drivers build config
00:03:53.524 crypto/scheduler: not in enabled drivers build config
00:03:53.524 crypto/uadk: not in enabled drivers build config
00:03:53.524 crypto/virtio: not in enabled drivers build config
00:03:53.524 compress/isal: not in enabled drivers build config
00:03:53.524 compress/mlx5: not in enabled drivers build config
00:03:53.524 compress/nitrox: not in enabled drivers build config
00:03:53.524 compress/octeontx: not in enabled drivers build config
00:03:53.524 compress/zlib: not in enabled drivers build config
00:03:53.524 regex/*: missing internal dependency, "regexdev"
00:03:53.524 ml/*: missing internal dependency, "mldev"
00:03:53.524 vdpa/ifc: not in enabled drivers build config
00:03:53.524 vdpa/mlx5: not in enabled drivers build config
00:03:53.524 vdpa/nfp: not in enabled drivers build config
00:03:53.524 vdpa/sfc: not in enabled drivers build config
00:03:53.524 event/*: missing internal dependency, "eventdev"
00:03:53.524 baseband/*: missing internal dependency, "bbdev"
00:03:53.524 gpu/*: missing internal dependency, "gpudev"
00:03:53.524
00:03:53.524
00:03:53.524 Build targets in project: 85
00:03:53.524
00:03:53.524 DPDK 24.03.0
00:03:53.524
00:03:53.524 User defined options
00:03:53.524 buildtype : debug
00:03:53.524 default_library : shared
00:03:53.524 libdir : lib
00:03:53.524 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:03:53.524 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:53.524 c_link_args :
00:03:53.524 cpu_instruction_set: native
00:03:53.524 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:03:53.524 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:03:53.524 enable_docs : false
00:03:53.524 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:53.524 enable_kmods : false
00:03:53.524 max_lcores : 128
00:03:53.524 tests : false
00:03:53.524
00:03:53.524 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:53.524 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
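SPDK's configure set up the bundled DPDK with the options summarized above, and ninja now builds it. A hedged sketch of a roughly equivalent manual invocation; the option names are taken from the summary, and the long disable_apps/disable_libs lists would be passed the same way:

```bash
# Roughly what SPDK's configure arranged for the bundled DPDK (illustrative):
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
meson setup build-tmp \
    -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C build-tmp
```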
00:03:53.524 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:53.787 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:53.787 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:53.787 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:53.787 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:53.787 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:53.787 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:53.787 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:53.787 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:53.787 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:53.787 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:53.787 [12/268] Linking static target lib/librte_kvargs.a
00:03:53.787 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:53.787 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:53.787 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:53.787 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:53.787 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:53.787 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:53.787 [19/268] Linking static target lib/librte_log.a
00:03:53.787 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:53.787 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:54.047 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:54.047 [23/268] Linking static target lib/librte_pci.a
00:03:54.047 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:54.047 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:54.047 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:54.047 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:54.047 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:54.047 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:54.047 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:54.047 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:54.047 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:54.047 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:54.047 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:54.047 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:54.047 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:54.047 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:54.047 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:54.047 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:54.047 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:54.047 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:54.047 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:54.047 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:54.047 [44/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:54.047 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:54.048 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:54.048 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:54.048 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:54.048 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:54.048 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:54.048 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:54.048 [52/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:54.048 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:54.306 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:54.306 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:54.306 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:54.306 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:54.306 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:54.306 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:54.306 [60/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:54.306 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:54.306 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:54.306 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:54.306 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:54.306 [65/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:54.306 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:54.306 [67/268] Linking static target lib/librte_meter.a
00:03:54.306 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:54.306 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:54.306 [70/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:54.306 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:54.306 [72/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:54.306 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:54.306 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:54.306 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:54.306 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:54.306 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:54.306 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:54.306 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:54.306 [80/268] Linking static target lib/librte_ring.a
00:03:54.306 [81/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:54.306 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:54.306 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:54.306 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:54.306 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:54.306 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:54.306 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:54.306 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:54.306 [89/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:54.306 [90/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:54.306 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:54.306 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:54.306 [93/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:54.306 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:54.306 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:54.306 [96/268] Linking static target lib/librte_telemetry.a
00:03:54.306 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:54.306 [98/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:54.306 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:54.306 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:54.306 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:54.306 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:54.306 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:54.306 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:54.306 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:54.306 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:54.306 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:54.306 [108/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:54.306 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:54.306 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:54.306 [111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:54.306 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:54.306 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:54.306 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:54.306 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:54.306 [116/268] Linking static target lib/librte_mempool.a
00:03:54.306 [117/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:54.306 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:54.306 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:54.306 [120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:54.306 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:54.306 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:54.306 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:54.306 [124/268] Linking static target lib/librte_cmdline.a
00:03:54.306 [125/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:54.306 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:54.565 [127/268] Linking static target lib/librte_net.a
00:03:54.565 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:54.565 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:54.565 [130/268] Linking static target lib/librte_eal.a
00:03:54.565 [131/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:54.565 [132/268] Linking static target lib/librte_rcu.a
00:03:54.565 [133/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:54.565 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:54.565 [135/268] Linking static target lib/librte_mbuf.a
00:03:54.565 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:54.565 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:54.565 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:54.565 [139/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:54.565 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:54.565 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.565 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:54.565 [144/268] Linking target lib/librte_log.so.24.1 00:03:54.565 [145/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:54.565 [146/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:54.565 [147/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:54.565 [148/268] Linking static target lib/librte_timer.a 00:03:54.565 [149/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:54.565 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:54.565 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:54.565 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:54.565 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:54.565 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:54.565 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:54.565 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:54.565 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:54.565 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:54.823 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.823 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:54.823 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:54.823 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:54.823 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:54.823 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:54.823 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.823 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:54.823 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:54.823 [168/268] Linking target lib/librte_kvargs.so.24.1 00:03:54.823 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:54.823 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:54.823 [171/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:54.823 [172/268] Linking target lib/librte_telemetry.so.24.1 00:03:54.823 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:54.823 [174/268] Linking static target lib/librte_reorder.a 00:03:54.823 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:54.823 [176/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:54.823 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:54.823 [178/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:54.823 [179/268] Linking static target lib/librte_power.a 00:03:54.823 [180/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.823 [181/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:54.823 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:54.823 [183/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:54.823 [184/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:54.824 [185/268] Linking static target lib/librte_dmadev.a 00:03:54.824 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:54.824 [187/268] Linking static target lib/librte_compressdev.a 00:03:54.824 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:54.824 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:54.824 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:54.824 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:54.824 [192/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:54.824 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:54.824 [194/268] Linking static target drivers/librte_bus_vdev.a 00:03:54.824 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:54.824 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:54.824 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:54.824 [198/268] Linking static target lib/librte_security.a 00:03:54.824 [199/268] Linking static target lib/librte_hash.a 00:03:55.081 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:55.081 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:55.081 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:55.081 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:55.081 [204/268] Linking static target drivers/librte_mempool_ring.a 00:03:55.081 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:55.081 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:55.081 [207/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.081 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:55.081 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:55.081 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.081 [211/268] Linking static target drivers/librte_bus_pci.a 00:03:55.081 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:55.081 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.081 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:55.081 [215/268] Linking static target lib/librte_cryptodev.a 00:03:55.081 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.339 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.339 [218/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.339 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:55.339 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.602 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.602 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:55.602 [223/268] Linking static target lib/librte_ethdev.a 00:03:55.602 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:55.602 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.860 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.860 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.791 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:56.791 [229/268] Linking static target lib/librte_vhost.a 00:03:57.049 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.420 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.677 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.933 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.933 [234/268] Linking target lib/librte_eal.so.24.1 00:04:03.933 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:04.191 [236/268] Linking target lib/librte_ring.so.24.1 00:04:04.191 [237/268] Linking target lib/librte_pci.so.24.1 00:04:04.191 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:04.191 [239/268] Linking target lib/librte_dmadev.so.24.1 00:04:04.191 [240/268] Linking target lib/librte_meter.so.24.1 00:04:04.191 [241/268] Linking target lib/librte_timer.so.24.1 00:04:04.191 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:04.191 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:04.191 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:04.191 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:04.191 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:04.191 [247/268] Linking target lib/librte_rcu.so.24.1 00:04:04.191 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:04.191 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:04.448 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:04.448 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:04.448 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:04.448 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:04.448 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:04.448 [255/268] Linking target lib/librte_compressdev.so.24.1 00:04:04.448 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:04.448 [257/268] Linking target lib/librte_net.so.24.1 00:04:04.448 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:04.706 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:04.706 [260/268] Generating symbol file 
lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:04.706 [261/268] Linking target lib/librte_hash.so.24.1 00:04:04.706 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:04.706 [263/268] Linking target lib/librte_security.so.24.1 00:04:04.706 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:04.963 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:04.963 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:04.963 [267/268] Linking target lib/librte_power.so.24.1 00:04:04.963 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:04.963 INFO: autodetecting backend as ninja 00:04:04.963 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:04:17.161 CC lib/log/log.o 00:04:17.161 CC lib/log/log_flags.o 00:04:17.161 CC lib/log/log_deprecated.o 00:04:17.161 CC lib/ut_mock/mock.o 00:04:17.161 CC lib/ut/ut.o 00:04:17.161 LIB libspdk_ut.a 00:04:17.161 LIB libspdk_log.a 00:04:17.161 LIB libspdk_ut_mock.a 00:04:17.161 SO libspdk_ut.so.2.0 00:04:17.161 SO libspdk_ut_mock.so.6.0 00:04:17.161 SO libspdk_log.so.7.1 00:04:17.161 SYMLINK libspdk_ut.so 00:04:17.161 SYMLINK libspdk_ut_mock.so 00:04:17.161 SYMLINK libspdk_log.so 00:04:17.161 CC lib/ioat/ioat.o 00:04:17.161 CC lib/util/base64.o 00:04:17.161 CC lib/util/bit_array.o 00:04:17.161 CC lib/util/crc32.o 00:04:17.161 CC lib/util/cpuset.o 00:04:17.161 CC lib/util/crc16.o 00:04:17.161 CC lib/util/crc32c.o 00:04:17.161 CC lib/util/crc32_ieee.o 00:04:17.161 CC lib/util/fd.o 00:04:17.161 CC lib/util/crc64.o 00:04:17.161 CC lib/util/dif.o 00:04:17.161 CC lib/util/file.o 00:04:17.161 CC lib/util/fd_group.o 00:04:17.161 CC lib/util/hexlify.o 00:04:17.161 CC lib/util/iov.o 00:04:17.161 CC lib/util/math.o 00:04:17.161 CC lib/dma/dma.o 00:04:17.161 CC lib/util/net.o 00:04:17.161 CC lib/util/pipe.o 00:04:17.161 CC lib/util/strerror_tls.o 00:04:17.161 CC lib/util/string.o 00:04:17.161 CC lib/util/uuid.o 00:04:17.161 CC lib/util/xor.o 00:04:17.161 CC lib/util/zipf.o 00:04:17.161 CC lib/util/md5.o 00:04:17.161 CXX lib/trace_parser/trace.o 00:04:17.161 CC lib/vfio_user/host/vfio_user_pci.o 00:04:17.161 CC lib/vfio_user/host/vfio_user.o 00:04:17.161 LIB libspdk_dma.a 00:04:17.161 LIB libspdk_ioat.a 00:04:17.161 SO libspdk_dma.so.5.0 00:04:17.161 SO libspdk_ioat.so.7.0 00:04:17.161 SYMLINK libspdk_dma.so 00:04:17.161 SYMLINK libspdk_ioat.so 00:04:17.161 LIB libspdk_vfio_user.a 00:04:17.161 SO libspdk_vfio_user.so.5.0 00:04:17.161 SYMLINK libspdk_vfio_user.so 00:04:17.161 LIB libspdk_util.a 00:04:17.161 SO libspdk_util.so.10.1 00:04:17.161 SYMLINK libspdk_util.so 00:04:17.161 LIB libspdk_trace_parser.a 00:04:17.161 SO libspdk_trace_parser.so.6.0 00:04:17.161 SYMLINK libspdk_trace_parser.so 00:04:17.418 CC lib/rdma_utils/rdma_utils.o 00:04:17.418 CC lib/json/json_parse.o 00:04:17.418 CC lib/json/json_util.o 00:04:17.418 CC lib/json/json_write.o 00:04:17.418 CC lib/idxd/idxd.o 00:04:17.418 CC lib/idxd/idxd_user.o 00:04:17.418 CC lib/idxd/idxd_kernel.o 00:04:17.418 CC lib/env_dpdk/env.o 00:04:17.418 CC lib/env_dpdk/pci.o 00:04:17.418 CC lib/env_dpdk/memory.o 00:04:17.418 CC lib/env_dpdk/init.o 00:04:17.418 CC lib/env_dpdk/pci_virtio.o 00:04:17.418 CC lib/env_dpdk/threads.o 00:04:17.418 CC lib/env_dpdk/pci_ioat.o 00:04:17.418 CC lib/env_dpdk/pci_vmd.o 00:04:17.418 CC lib/env_dpdk/pci_idxd.o 00:04:17.418 CC lib/conf/conf.o 00:04:17.418 CC lib/vmd/vmd.o 00:04:17.418 CC 
lib/env_dpdk/pci_event.o 00:04:17.418 CC lib/env_dpdk/sigbus_handler.o 00:04:17.418 CC lib/env_dpdk/pci_dpdk.o 00:04:17.418 CC lib/vmd/led.o 00:04:17.418 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:17.418 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:17.418 LIB libspdk_rdma_utils.a 00:04:17.675 LIB libspdk_conf.a 00:04:17.675 SO libspdk_rdma_utils.so.1.0 00:04:17.675 LIB libspdk_json.a 00:04:17.675 SO libspdk_conf.so.6.0 00:04:17.675 SO libspdk_json.so.6.0 00:04:17.675 SYMLINK libspdk_rdma_utils.so 00:04:17.675 SYMLINK libspdk_conf.so 00:04:17.675 SYMLINK libspdk_json.so 00:04:17.675 LIB libspdk_idxd.a 00:04:17.675 SO libspdk_idxd.so.12.1 00:04:17.933 LIB libspdk_vmd.a 00:04:17.933 SYMLINK libspdk_idxd.so 00:04:17.933 SO libspdk_vmd.so.6.0 00:04:17.933 CC lib/rdma_provider/common.o 00:04:17.933 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:17.933 SYMLINK libspdk_vmd.so 00:04:17.933 CC lib/jsonrpc/jsonrpc_server.o 00:04:17.933 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:17.933 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:17.933 CC lib/jsonrpc/jsonrpc_client.o 00:04:18.190 LIB libspdk_rdma_provider.a 00:04:18.190 SO libspdk_rdma_provider.so.7.0 00:04:18.190 SYMLINK libspdk_rdma_provider.so 00:04:18.190 LIB libspdk_jsonrpc.a 00:04:18.190 SO libspdk_jsonrpc.so.6.0 00:04:18.190 SYMLINK libspdk_jsonrpc.so 00:04:18.447 LIB libspdk_env_dpdk.a 00:04:18.447 SO libspdk_env_dpdk.so.15.1 00:04:18.447 SYMLINK libspdk_env_dpdk.so 00:04:18.705 CC lib/rpc/rpc.o 00:04:18.705 LIB libspdk_rpc.a 00:04:18.705 SO libspdk_rpc.so.6.0 00:04:18.705 SYMLINK libspdk_rpc.so 00:04:19.270 CC lib/trace/trace.o 00:04:19.270 CC lib/trace/trace_flags.o 00:04:19.270 CC lib/trace/trace_rpc.o 00:04:19.270 CC lib/keyring/keyring.o 00:04:19.270 CC lib/keyring/keyring_rpc.o 00:04:19.270 CC lib/notify/notify.o 00:04:19.270 CC lib/notify/notify_rpc.o 00:04:19.270 LIB libspdk_notify.a 00:04:19.270 SO libspdk_notify.so.6.0 00:04:19.270 LIB libspdk_keyring.a 00:04:19.270 LIB libspdk_trace.a 00:04:19.270 SO libspdk_keyring.so.2.0 00:04:19.270 SYMLINK libspdk_notify.so 00:04:19.270 SO libspdk_trace.so.11.0 00:04:19.529 SYMLINK libspdk_keyring.so 00:04:19.529 SYMLINK libspdk_trace.so 00:04:19.786 CC lib/thread/thread.o 00:04:19.786 CC lib/thread/iobuf.o 00:04:19.786 CC lib/sock/sock.o 00:04:19.786 CC lib/sock/sock_rpc.o 00:04:20.044 LIB libspdk_sock.a 00:04:20.044 SO libspdk_sock.so.10.0 00:04:20.044 SYMLINK libspdk_sock.so 00:04:20.610 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:20.610 CC lib/nvme/nvme_ctrlr.o 00:04:20.610 CC lib/nvme/nvme_ns_cmd.o 00:04:20.610 CC lib/nvme/nvme_fabric.o 00:04:20.610 CC lib/nvme/nvme_ns.o 00:04:20.610 CC lib/nvme/nvme_pcie.o 00:04:20.610 CC lib/nvme/nvme_pcie_common.o 00:04:20.610 CC lib/nvme/nvme.o 00:04:20.610 CC lib/nvme/nvme_quirks.o 00:04:20.610 CC lib/nvme/nvme_qpair.o 00:04:20.610 CC lib/nvme/nvme_transport.o 00:04:20.610 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:20.610 CC lib/nvme/nvme_discovery.o 00:04:20.610 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:20.610 CC lib/nvme/nvme_tcp.o 00:04:20.610 CC lib/nvme/nvme_opal.o 00:04:20.610 CC lib/nvme/nvme_io_msg.o 00:04:20.610 CC lib/nvme/nvme_poll_group.o 00:04:20.610 CC lib/nvme/nvme_stubs.o 00:04:20.610 CC lib/nvme/nvme_zns.o 00:04:20.610 CC lib/nvme/nvme_auth.o 00:04:20.610 CC lib/nvme/nvme_cuse.o 00:04:20.610 CC lib/nvme/nvme_rdma.o 00:04:20.867 LIB libspdk_thread.a 00:04:20.867 SO libspdk_thread.so.11.0 00:04:20.867 SYMLINK libspdk_thread.so 00:04:21.125 CC lib/blob/request.o 00:04:21.125 CC lib/blob/blobstore.o 00:04:21.125 CC lib/blob/zeroes.o 00:04:21.125 CC 
lib/virtio/virtio.o 00:04:21.125 CC lib/blob/blob_bs_dev.o 00:04:21.125 CC lib/virtio/virtio_vhost_user.o 00:04:21.125 CC lib/virtio/virtio_pci.o 00:04:21.125 CC lib/virtio/virtio_vfio_user.o 00:04:21.125 CC lib/fsdev/fsdev.o 00:04:21.125 CC lib/accel/accel.o 00:04:21.125 CC lib/fsdev/fsdev_io.o 00:04:21.125 CC lib/accel/accel_rpc.o 00:04:21.125 CC lib/fsdev/fsdev_rpc.o 00:04:21.125 CC lib/accel/accel_sw.o 00:04:21.125 CC lib/init/json_config.o 00:04:21.125 CC lib/init/rpc.o 00:04:21.125 CC lib/init/subsystem.o 00:04:21.125 CC lib/init/subsystem_rpc.o 00:04:21.381 LIB libspdk_init.a 00:04:21.381 SO libspdk_init.so.6.0 00:04:21.381 LIB libspdk_virtio.a 00:04:21.381 SO libspdk_virtio.so.7.0 00:04:21.381 SYMLINK libspdk_init.so 00:04:21.637 SYMLINK libspdk_virtio.so 00:04:21.637 LIB libspdk_fsdev.a 00:04:21.637 SO libspdk_fsdev.so.2.0 00:04:21.637 CC lib/event/app.o 00:04:21.637 CC lib/event/log_rpc.o 00:04:21.637 CC lib/event/reactor.o 00:04:21.637 CC lib/event/scheduler_static.o 00:04:21.637 CC lib/event/app_rpc.o 00:04:21.637 SYMLINK libspdk_fsdev.so 00:04:21.894 LIB libspdk_accel.a 00:04:21.894 SO libspdk_accel.so.16.0 00:04:21.894 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:22.151 SYMLINK libspdk_accel.so 00:04:22.151 LIB libspdk_event.a 00:04:22.151 LIB libspdk_nvme.a 00:04:22.151 SO libspdk_event.so.14.0 00:04:22.151 SYMLINK libspdk_event.so 00:04:22.151 SO libspdk_nvme.so.15.0 00:04:22.408 CC lib/bdev/bdev.o 00:04:22.408 CC lib/bdev/bdev_rpc.o 00:04:22.408 CC lib/bdev/bdev_zone.o 00:04:22.408 CC lib/bdev/part.o 00:04:22.408 CC lib/bdev/scsi_nvme.o 00:04:22.408 SYMLINK libspdk_nvme.so 00:04:22.408 LIB libspdk_fuse_dispatcher.a 00:04:22.408 SO libspdk_fuse_dispatcher.so.1.0 00:04:22.664 SYMLINK libspdk_fuse_dispatcher.so 00:04:23.229 LIB libspdk_blob.a 00:04:23.229 SO libspdk_blob.so.11.0 00:04:23.487 SYMLINK libspdk_blob.so 00:04:23.744 CC lib/lvol/lvol.o 00:04:23.744 CC lib/blobfs/blobfs.o 00:04:23.744 CC lib/blobfs/tree.o 00:04:24.309 LIB libspdk_bdev.a 00:04:24.309 LIB libspdk_blobfs.a 00:04:24.309 SO libspdk_bdev.so.17.0 00:04:24.309 SO libspdk_blobfs.so.10.0 00:04:24.309 LIB libspdk_lvol.a 00:04:24.309 SYMLINK libspdk_bdev.so 00:04:24.309 SYMLINK libspdk_blobfs.so 00:04:24.309 SO libspdk_lvol.so.10.0 00:04:24.309 SYMLINK libspdk_lvol.so 00:04:24.567 CC lib/ftl/ftl_core.o 00:04:24.567 CC lib/ftl/ftl_init.o 00:04:24.567 CC lib/ftl/ftl_layout.o 00:04:24.567 CC lib/ftl/ftl_debug.o 00:04:24.567 CC lib/ftl/ftl_l2p.o 00:04:24.567 CC lib/ftl/ftl_io.o 00:04:24.567 CC lib/ftl/ftl_l2p_flat.o 00:04:24.567 CC lib/ftl/ftl_sb.o 00:04:24.567 CC lib/ftl/ftl_nv_cache.o 00:04:24.567 CC lib/ftl/ftl_band.o 00:04:24.567 CC lib/ftl/ftl_band_ops.o 00:04:24.567 CC lib/ftl/ftl_writer.o 00:04:24.567 CC lib/nvmf/ctrlr_discovery.o 00:04:24.567 CC lib/nvmf/ctrlr.o 00:04:24.567 CC lib/ftl/ftl_p2l.o 00:04:24.567 CC lib/ftl/ftl_rq.o 00:04:24.567 CC lib/ftl/ftl_reloc.o 00:04:24.567 CC lib/ftl/ftl_l2p_cache.o 00:04:24.567 CC lib/nvmf/nvmf.o 00:04:24.567 CC lib/nvmf/ctrlr_bdev.o 00:04:24.567 CC lib/nvmf/subsystem.o 00:04:24.567 CC lib/ftl/ftl_p2l_log.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt.o 00:04:24.568 CC lib/nvmf/nvmf_rpc.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:24.568 CC lib/nvmf/transport.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:24.568 CC lib/nvmf/tcp.o 00:04:24.568 CC lib/nvmf/stubs.o 00:04:24.568 CC lib/nvmf/auth.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:24.568 CC lib/nvmf/mdns_server.o 00:04:24.568 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:04:24.568 CC lib/nvmf/rdma.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:24.568 CC lib/ublk/ublk.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:24.568 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:24.568 CC lib/ublk/ublk_rpc.o 00:04:24.568 CC lib/ftl/utils/ftl_conf.o 00:04:24.568 CC lib/ftl/utils/ftl_md.o 00:04:24.568 CC lib/ftl/utils/ftl_mempool.o 00:04:24.568 CC lib/nbd/nbd.o 00:04:24.568 CC lib/nbd/nbd_rpc.o 00:04:24.568 CC lib/ftl/utils/ftl_bitmap.o 00:04:24.568 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:24.568 CC lib/ftl/utils/ftl_property.o 00:04:24.568 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:24.568 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:24.568 CC lib/scsi/dev.o 00:04:24.568 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:24.568 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:24.568 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:24.568 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:24.568 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:24.568 CC lib/scsi/port.o 00:04:24.568 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:24.568 CC lib/scsi/lun.o 00:04:24.568 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:24.568 CC lib/scsi/scsi_bdev.o 00:04:24.568 CC lib/scsi/scsi.o 00:04:24.568 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:24.568 CC lib/scsi/scsi_pr.o 00:04:24.568 CC lib/scsi/scsi_rpc.o 00:04:24.568 CC lib/scsi/task.o 00:04:24.568 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:24.568 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:24.568 CC lib/ftl/base/ftl_base_dev.o 00:04:24.568 CC lib/ftl/base/ftl_base_bdev.o 00:04:24.568 CC lib/ftl/ftl_trace.o 00:04:25.501 LIB libspdk_scsi.a 00:04:25.501 LIB libspdk_nbd.a 00:04:25.501 SO libspdk_scsi.so.9.0 00:04:25.501 SO libspdk_nbd.so.7.0 00:04:25.501 SYMLINK libspdk_nbd.so 00:04:25.501 LIB libspdk_ublk.a 00:04:25.501 SYMLINK libspdk_scsi.so 00:04:25.501 SO libspdk_ublk.so.3.0 00:04:25.501 SYMLINK libspdk_ublk.so 00:04:25.501 LIB libspdk_ftl.a 00:04:25.759 CC lib/vhost/vhost.o 00:04:25.759 CC lib/vhost/vhost_blk.o 00:04:25.759 CC lib/vhost/vhost_rpc.o 00:04:25.759 CC lib/vhost/rte_vhost_user.o 00:04:25.759 CC lib/vhost/vhost_scsi.o 00:04:25.759 CC lib/iscsi/conn.o 00:04:25.759 CC lib/iscsi/iscsi.o 00:04:25.759 CC lib/iscsi/init_grp.o 00:04:25.759 CC lib/iscsi/param.o 00:04:25.759 SO libspdk_ftl.so.9.0 00:04:25.759 CC lib/iscsi/portal_grp.o 00:04:25.759 CC lib/iscsi/tgt_node.o 00:04:25.759 CC lib/iscsi/iscsi_subsystem.o 00:04:25.759 CC lib/iscsi/iscsi_rpc.o 00:04:25.759 CC lib/iscsi/task.o 00:04:26.016 SYMLINK libspdk_ftl.so 00:04:26.274 LIB libspdk_nvmf.a 00:04:26.532 SO libspdk_nvmf.so.20.0 00:04:26.532 LIB libspdk_vhost.a 00:04:26.532 SO libspdk_vhost.so.8.0 00:04:26.532 SYMLINK libspdk_nvmf.so 00:04:26.532 SYMLINK libspdk_vhost.so 00:04:26.790 LIB libspdk_iscsi.a 00:04:26.790 SO libspdk_iscsi.so.8.0 00:04:26.790 SYMLINK libspdk_iscsi.so 00:04:27.356 CC module/env_dpdk/env_dpdk_rpc.o 00:04:27.356 CC module/accel/ioat/accel_ioat.o 00:04:27.356 CC module/accel/ioat/accel_ioat_rpc.o 00:04:27.356 CC module/keyring/file/keyring_rpc.o 00:04:27.356 CC module/keyring/file/keyring.o 00:04:27.356 CC module/accel/iaa/accel_iaa.o 00:04:27.356 CC module/accel/error/accel_error.o 00:04:27.356 CC module/accel/iaa/accel_iaa_rpc.o 00:04:27.356 CC module/accel/error/accel_error_rpc.o 00:04:27.614 CC module/scheduler/gscheduler/gscheduler.o 00:04:27.614 CC 
module/accel/dsa/accel_dsa.o 00:04:27.614 CC module/keyring/linux/keyring.o 00:04:27.614 CC module/accel/dsa/accel_dsa_rpc.o 00:04:27.614 CC module/keyring/linux/keyring_rpc.o 00:04:27.614 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:27.614 CC module/sock/posix/posix.o 00:04:27.614 LIB libspdk_env_dpdk_rpc.a 00:04:27.614 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:27.614 CC module/blob/bdev/blob_bdev.o 00:04:27.614 CC module/fsdev/aio/fsdev_aio.o 00:04:27.614 CC module/fsdev/aio/linux_aio_mgr.o 00:04:27.614 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:27.614 SO libspdk_env_dpdk_rpc.so.6.0 00:04:27.614 SYMLINK libspdk_env_dpdk_rpc.so 00:04:27.614 LIB libspdk_keyring_file.a 00:04:27.614 LIB libspdk_scheduler_gscheduler.a 00:04:27.614 LIB libspdk_accel_ioat.a 00:04:27.614 LIB libspdk_keyring_linux.a 00:04:27.614 SO libspdk_keyring_file.so.2.0 00:04:27.614 LIB libspdk_accel_error.a 00:04:27.614 SO libspdk_scheduler_gscheduler.so.4.0 00:04:27.614 SO libspdk_keyring_linux.so.1.0 00:04:27.614 LIB libspdk_scheduler_dpdk_governor.a 00:04:27.614 SO libspdk_accel_ioat.so.6.0 00:04:27.614 LIB libspdk_accel_iaa.a 00:04:27.614 SO libspdk_accel_error.so.2.0 00:04:27.614 SO libspdk_accel_iaa.so.3.0 00:04:27.614 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:27.614 SYMLINK libspdk_keyring_file.so 00:04:27.614 LIB libspdk_scheduler_dynamic.a 00:04:27.614 SYMLINK libspdk_scheduler_gscheduler.so 00:04:27.614 SYMLINK libspdk_keyring_linux.so 00:04:27.614 SYMLINK libspdk_accel_ioat.so 00:04:27.872 LIB libspdk_accel_dsa.a 00:04:27.872 SO libspdk_scheduler_dynamic.so.4.0 00:04:27.872 LIB libspdk_blob_bdev.a 00:04:27.872 SYMLINK libspdk_accel_error.so 00:04:27.872 SYMLINK libspdk_accel_iaa.so 00:04:27.872 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:27.872 SO libspdk_blob_bdev.so.11.0 00:04:27.872 SO libspdk_accel_dsa.so.5.0 00:04:27.872 SYMLINK libspdk_scheduler_dynamic.so 00:04:27.872 SYMLINK libspdk_blob_bdev.so 00:04:27.872 SYMLINK libspdk_accel_dsa.so 00:04:28.130 LIB libspdk_fsdev_aio.a 00:04:28.130 SO libspdk_fsdev_aio.so.1.0 00:04:28.130 LIB libspdk_sock_posix.a 00:04:28.130 SO libspdk_sock_posix.so.6.0 00:04:28.130 SYMLINK libspdk_fsdev_aio.so 00:04:28.130 SYMLINK libspdk_sock_posix.so 00:04:28.130 CC module/bdev/aio/bdev_aio.o 00:04:28.130 CC module/bdev/aio/bdev_aio_rpc.o 00:04:28.130 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:28.130 CC module/bdev/lvol/vbdev_lvol.o 00:04:28.130 CC module/bdev/malloc/bdev_malloc.o 00:04:28.130 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:28.387 CC module/blobfs/bdev/blobfs_bdev.o 00:04:28.387 CC module/bdev/iscsi/bdev_iscsi.o 00:04:28.387 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:28.387 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:28.387 CC module/bdev/raid/bdev_raid_rpc.o 00:04:28.387 CC module/bdev/raid/bdev_raid.o 00:04:28.387 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.387 CC module/bdev/delay/vbdev_delay.o 00:04:28.387 CC module/bdev/raid/raid1.o 00:04:28.387 CC module/bdev/raid/raid0.o 00:04:28.387 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:28.387 CC module/bdev/raid/concat.o 00:04:28.387 CC module/bdev/error/vbdev_error.o 00:04:28.387 CC module/bdev/nvme/bdev_nvme.o 00:04:28.387 CC module/bdev/error/vbdev_error_rpc.o 00:04:28.387 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:28.387 CC module/bdev/nvme/vbdev_opal.o 00:04:28.387 CC module/bdev/nvme/nvme_rpc.o 00:04:28.387 CC module/bdev/nvme/bdev_mdns_client.o 00:04:28.387 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.387 CC module/bdev/split/vbdev_split.o 00:04:28.387 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.387 CC module/bdev/gpt/gpt.o 00:04:28.387 CC module/bdev/ftl/bdev_ftl.o 00:04:28.387 CC module/bdev/split/vbdev_split_rpc.o 00:04:28.387 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.387 CC module/bdev/gpt/vbdev_gpt.o 00:04:28.387 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:28.387 CC module/bdev/null/bdev_null.o 00:04:28.387 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:28.387 CC module/bdev/null/bdev_null_rpc.o 00:04:28.387 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:28.387 CC module/bdev/passthru/vbdev_passthru.o 00:04:28.387 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:28.387 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:28.387 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:28.387 LIB libspdk_blobfs_bdev.a 00:04:28.645 SO libspdk_blobfs_bdev.so.6.0 00:04:28.645 LIB libspdk_bdev_split.a 00:04:28.645 SO libspdk_bdev_split.so.6.0 00:04:28.645 LIB libspdk_bdev_gpt.a 00:04:28.645 LIB libspdk_bdev_error.a 00:04:28.645 LIB libspdk_bdev_null.a 00:04:28.645 SYMLINK libspdk_blobfs_bdev.so 00:04:28.645 LIB libspdk_bdev_passthru.a 00:04:28.645 LIB libspdk_bdev_ftl.a 00:04:28.645 SO libspdk_bdev_error.so.6.0 00:04:28.645 SO libspdk_bdev_gpt.so.6.0 00:04:28.645 LIB libspdk_bdev_zone_block.a 00:04:28.645 SO libspdk_bdev_null.so.6.0 00:04:28.645 LIB libspdk_bdev_aio.a 00:04:28.645 SO libspdk_bdev_passthru.so.6.0 00:04:28.645 SYMLINK libspdk_bdev_split.so 00:04:28.645 LIB libspdk_bdev_malloc.a 00:04:28.645 LIB libspdk_bdev_delay.a 00:04:28.645 SO libspdk_bdev_ftl.so.6.0 00:04:28.645 LIB libspdk_bdev_iscsi.a 00:04:28.645 SO libspdk_bdev_zone_block.so.6.0 00:04:28.645 SO libspdk_bdev_malloc.so.6.0 00:04:28.645 SO libspdk_bdev_aio.so.6.0 00:04:28.645 SYMLINK libspdk_bdev_gpt.so 00:04:28.645 SYMLINK libspdk_bdev_error.so 00:04:28.645 SO libspdk_bdev_iscsi.so.6.0 00:04:28.645 SO libspdk_bdev_delay.so.6.0 00:04:28.645 SYMLINK libspdk_bdev_null.so 00:04:28.645 SYMLINK libspdk_bdev_passthru.so 00:04:28.645 SYMLINK libspdk_bdev_ftl.so 00:04:28.645 SYMLINK libspdk_bdev_aio.so 00:04:28.645 SYMLINK libspdk_bdev_iscsi.so 00:04:28.645 SYMLINK libspdk_bdev_zone_block.so 00:04:28.645 SYMLINK libspdk_bdev_malloc.so 00:04:28.645 SYMLINK libspdk_bdev_delay.so 00:04:28.645 LIB libspdk_bdev_lvol.a 00:04:28.645 LIB libspdk_bdev_virtio.a 00:04:28.903 SO libspdk_bdev_lvol.so.6.0 00:04:28.903 SO libspdk_bdev_virtio.so.6.0 00:04:28.903 SYMLINK libspdk_bdev_lvol.so 00:04:28.903 SYMLINK libspdk_bdev_virtio.so 00:04:29.160 LIB libspdk_bdev_raid.a 00:04:29.160 SO libspdk_bdev_raid.so.6.0 00:04:29.160 SYMLINK libspdk_bdev_raid.so 00:04:30.094 LIB libspdk_bdev_nvme.a 00:04:30.094 SO libspdk_bdev_nvme.so.7.1 00:04:30.352 SYMLINK libspdk_bdev_nvme.so 00:04:30.917 CC module/event/subsystems/keyring/keyring.o 00:04:30.917 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.917 CC module/event/subsystems/vmd/vmd.o 00:04:30.917 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:30.917 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.917 CC module/event/subsystems/fsdev/fsdev.o 00:04:30.917 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.917 CC module/event/subsystems/sock/sock.o 00:04:30.918 CC module/event/subsystems/scheduler/scheduler.o 00:04:30.918 LIB libspdk_event_vhost_blk.a 00:04:30.918 LIB libspdk_event_keyring.a 00:04:30.918 LIB libspdk_event_fsdev.a 00:04:30.918 LIB libspdk_event_vmd.a 00:04:30.918 SO libspdk_event_keyring.so.1.0 00:04:30.918 LIB libspdk_event_iobuf.a 00:04:30.918 SO libspdk_event_vhost_blk.so.3.0 00:04:30.918 LIB libspdk_event_sock.a 
00:04:30.918 SO libspdk_event_fsdev.so.1.0 00:04:31.176 LIB libspdk_event_scheduler.a 00:04:31.176 SO libspdk_event_vmd.so.6.0 00:04:31.176 SO libspdk_event_iobuf.so.3.0 00:04:31.176 SO libspdk_event_sock.so.5.0 00:04:31.176 SO libspdk_event_scheduler.so.4.0 00:04:31.176 SYMLINK libspdk_event_keyring.so 00:04:31.176 SYMLINK libspdk_event_fsdev.so 00:04:31.176 SYMLINK libspdk_event_vhost_blk.so 00:04:31.176 SYMLINK libspdk_event_vmd.so 00:04:31.176 SYMLINK libspdk_event_iobuf.so 00:04:31.176 SYMLINK libspdk_event_sock.so 00:04:31.176 SYMLINK libspdk_event_scheduler.so 00:04:31.434 CC module/event/subsystems/accel/accel.o 00:04:31.691 LIB libspdk_event_accel.a 00:04:31.691 SO libspdk_event_accel.so.6.0 00:04:31.691 SYMLINK libspdk_event_accel.so 00:04:31.949 CC module/event/subsystems/bdev/bdev.o 00:04:32.206 LIB libspdk_event_bdev.a 00:04:32.206 SO libspdk_event_bdev.so.6.0 00:04:32.206 SYMLINK libspdk_event_bdev.so 00:04:32.463 CC module/event/subsystems/ublk/ublk.o 00:04:32.463 CC module/event/subsystems/nbd/nbd.o 00:04:32.463 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:32.463 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:32.463 CC module/event/subsystems/scsi/scsi.o 00:04:32.463 LIB libspdk_event_ublk.a 00:04:32.721 SO libspdk_event_ublk.so.3.0 00:04:32.721 LIB libspdk_event_nbd.a 00:04:32.721 LIB libspdk_event_scsi.a 00:04:32.721 SO libspdk_event_nbd.so.6.0 00:04:32.721 SO libspdk_event_scsi.so.6.0 00:04:32.721 SYMLINK libspdk_event_ublk.so 00:04:32.721 LIB libspdk_event_nvmf.a 00:04:32.721 SYMLINK libspdk_event_nbd.so 00:04:32.721 SYMLINK libspdk_event_scsi.so 00:04:32.721 SO libspdk_event_nvmf.so.6.0 00:04:32.721 SYMLINK libspdk_event_nvmf.so 00:04:33.005 CC module/event/subsystems/iscsi/iscsi.o 00:04:33.005 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:33.262 LIB libspdk_event_iscsi.a 00:04:33.262 LIB libspdk_event_vhost_scsi.a 00:04:33.262 SO libspdk_event_iscsi.so.6.0 00:04:33.262 SO libspdk_event_vhost_scsi.so.3.0 00:04:33.262 SYMLINK libspdk_event_vhost_scsi.so 00:04:33.262 SYMLINK libspdk_event_iscsi.so 00:04:33.518 SO libspdk.so.6.0 00:04:33.518 SYMLINK libspdk.so 00:04:33.785 CC app/spdk_top/spdk_top.o 00:04:33.785 CXX app/trace/trace.o 00:04:33.785 CC app/trace_record/trace_record.o 00:04:33.785 CC app/spdk_lspci/spdk_lspci.o 00:04:33.785 CC app/spdk_nvme_perf/perf.o 00:04:33.785 CC app/spdk_nvme_identify/identify.o 00:04:33.785 CC app/spdk_nvme_discover/discovery_aer.o 00:04:33.785 CC test/rpc_client/rpc_client_test.o 00:04:33.785 CC app/spdk_dd/spdk_dd.o 00:04:33.785 TEST_HEADER include/spdk/accel.h 00:04:33.785 TEST_HEADER include/spdk/assert.h 00:04:33.785 TEST_HEADER include/spdk/accel_module.h 00:04:33.786 TEST_HEADER include/spdk/barrier.h 00:04:33.786 CC app/nvmf_tgt/nvmf_main.o 00:04:33.786 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.786 TEST_HEADER include/spdk/bdev.h 00:04:33.786 TEST_HEADER include/spdk/base64.h 00:04:33.786 TEST_HEADER include/spdk/bdev_zone.h 00:04:33.786 TEST_HEADER include/spdk/bit_array.h 00:04:33.786 TEST_HEADER include/spdk/bdev_module.h 00:04:33.786 TEST_HEADER include/spdk/blob_bdev.h 00:04:33.786 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:33.786 TEST_HEADER include/spdk/bit_pool.h 00:04:33.786 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:33.786 TEST_HEADER include/spdk/blob.h 00:04:33.786 TEST_HEADER include/spdk/blobfs.h 00:04:33.786 TEST_HEADER include/spdk/conf.h 00:04:33.786 TEST_HEADER include/spdk/config.h 00:04:33.786 TEST_HEADER include/spdk/cpuset.h 00:04:33.786 TEST_HEADER include/spdk/crc32.h 00:04:33.786 
TEST_HEADER include/spdk/crc16.h 00:04:33.786 TEST_HEADER include/spdk/crc64.h 00:04:33.786 TEST_HEADER include/spdk/dif.h 00:04:33.786 TEST_HEADER include/spdk/dma.h 00:04:33.786 TEST_HEADER include/spdk/endian.h 00:04:33.786 TEST_HEADER include/spdk/env.h 00:04:33.786 TEST_HEADER include/spdk/env_dpdk.h 00:04:33.786 TEST_HEADER include/spdk/fd_group.h 00:04:33.786 TEST_HEADER include/spdk/event.h 00:04:33.786 TEST_HEADER include/spdk/file.h 00:04:33.786 TEST_HEADER include/spdk/fd.h 00:04:33.786 TEST_HEADER include/spdk/fsdev.h 00:04:33.786 TEST_HEADER include/spdk/fsdev_module.h 00:04:33.786 TEST_HEADER include/spdk/ftl.h 00:04:33.786 CC app/spdk_tgt/spdk_tgt.o 00:04:33.786 TEST_HEADER include/spdk/gpt_spec.h 00:04:33.786 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:33.786 TEST_HEADER include/spdk/histogram_data.h 00:04:33.786 TEST_HEADER include/spdk/hexlify.h 00:04:33.786 TEST_HEADER include/spdk/idxd.h 00:04:33.786 TEST_HEADER include/spdk/init.h 00:04:33.786 TEST_HEADER include/spdk/idxd_spec.h 00:04:33.786 TEST_HEADER include/spdk/ioat.h 00:04:33.786 TEST_HEADER include/spdk/ioat_spec.h 00:04:33.786 TEST_HEADER include/spdk/json.h 00:04:33.786 TEST_HEADER include/spdk/iscsi_spec.h 00:04:33.786 TEST_HEADER include/spdk/keyring.h 00:04:33.786 TEST_HEADER include/spdk/jsonrpc.h 00:04:33.786 TEST_HEADER include/spdk/keyring_module.h 00:04:33.786 TEST_HEADER include/spdk/likely.h 00:04:33.786 TEST_HEADER include/spdk/md5.h 00:04:33.786 TEST_HEADER include/spdk/lvol.h 00:04:33.786 TEST_HEADER include/spdk/log.h 00:04:33.786 TEST_HEADER include/spdk/memory.h 00:04:33.786 TEST_HEADER include/spdk/nbd.h 00:04:33.786 TEST_HEADER include/spdk/mmio.h 00:04:33.786 TEST_HEADER include/spdk/notify.h 00:04:33.786 TEST_HEADER include/spdk/net.h 00:04:33.786 TEST_HEADER include/spdk/nvme.h 00:04:33.786 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:33.786 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:33.786 TEST_HEADER include/spdk/nvme_intel.h 00:04:33.786 TEST_HEADER include/spdk/nvme_spec.h 00:04:33.786 TEST_HEADER include/spdk/nvme_zns.h 00:04:33.786 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:33.786 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:33.786 TEST_HEADER include/spdk/nvmf.h 00:04:33.786 TEST_HEADER include/spdk/nvmf_spec.h 00:04:33.786 TEST_HEADER include/spdk/nvmf_transport.h 00:04:33.786 TEST_HEADER include/spdk/opal.h 00:04:33.786 TEST_HEADER include/spdk/opal_spec.h 00:04:33.786 TEST_HEADER include/spdk/pci_ids.h 00:04:33.786 TEST_HEADER include/spdk/queue.h 00:04:33.786 TEST_HEADER include/spdk/pipe.h 00:04:33.786 TEST_HEADER include/spdk/reduce.h 00:04:33.786 TEST_HEADER include/spdk/rpc.h 00:04:33.786 TEST_HEADER include/spdk/scheduler.h 00:04:33.786 TEST_HEADER include/spdk/scsi.h 00:04:33.786 TEST_HEADER include/spdk/stdinc.h 00:04:33.786 TEST_HEADER include/spdk/sock.h 00:04:33.786 TEST_HEADER include/spdk/scsi_spec.h 00:04:33.786 TEST_HEADER include/spdk/trace.h 00:04:33.786 TEST_HEADER include/spdk/thread.h 00:04:33.786 TEST_HEADER include/spdk/string.h 00:04:33.786 TEST_HEADER include/spdk/tree.h 00:04:33.786 TEST_HEADER include/spdk/ublk.h 00:04:33.786 TEST_HEADER include/spdk/util.h 00:04:33.786 TEST_HEADER include/spdk/trace_parser.h 00:04:33.786 TEST_HEADER include/spdk/uuid.h 00:04:33.786 TEST_HEADER include/spdk/version.h 00:04:33.786 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:33.786 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:33.786 TEST_HEADER include/spdk/vhost.h 00:04:33.786 TEST_HEADER include/spdk/xor.h 00:04:33.786 TEST_HEADER include/spdk/vmd.h 
00:04:33.786 TEST_HEADER include/spdk/zipf.h 00:04:33.786 CXX test/cpp_headers/accel_module.o 00:04:33.786 CXX test/cpp_headers/accel.o 00:04:33.786 CXX test/cpp_headers/assert.o 00:04:33.786 CXX test/cpp_headers/base64.o 00:04:33.786 CXX test/cpp_headers/barrier.o 00:04:33.786 CXX test/cpp_headers/bdev.o 00:04:33.786 CXX test/cpp_headers/bdev_module.o 00:04:33.786 CXX test/cpp_headers/bit_array.o 00:04:33.786 CXX test/cpp_headers/bit_pool.o 00:04:33.786 CXX test/cpp_headers/bdev_zone.o 00:04:33.786 CXX test/cpp_headers/blob_bdev.o 00:04:33.786 CXX test/cpp_headers/blobfs.o 00:04:33.786 CXX test/cpp_headers/blob.o 00:04:33.786 CXX test/cpp_headers/blobfs_bdev.o 00:04:33.786 CXX test/cpp_headers/conf.o 00:04:33.786 CXX test/cpp_headers/config.o 00:04:33.786 CXX test/cpp_headers/cpuset.o 00:04:33.786 CXX test/cpp_headers/crc32.o 00:04:33.786 CXX test/cpp_headers/crc64.o 00:04:33.786 CXX test/cpp_headers/crc16.o 00:04:33.786 CXX test/cpp_headers/dma.o 00:04:33.786 CXX test/cpp_headers/dif.o 00:04:33.786 CXX test/cpp_headers/endian.o 00:04:33.786 CXX test/cpp_headers/env_dpdk.o 00:04:33.786 CXX test/cpp_headers/env.o 00:04:33.786 CXX test/cpp_headers/event.o 00:04:33.786 CXX test/cpp_headers/fd.o 00:04:33.786 CXX test/cpp_headers/fd_group.o 00:04:33.786 CXX test/cpp_headers/file.o 00:04:33.786 CXX test/cpp_headers/fsdev.o 00:04:33.786 CXX test/cpp_headers/fuse_dispatcher.o 00:04:33.786 CXX test/cpp_headers/ftl.o 00:04:33.786 CXX test/cpp_headers/fsdev_module.o 00:04:33.786 CXX test/cpp_headers/hexlify.o 00:04:33.786 CXX test/cpp_headers/gpt_spec.o 00:04:33.786 CXX test/cpp_headers/histogram_data.o 00:04:33.786 CXX test/cpp_headers/idxd_spec.o 00:04:33.786 CXX test/cpp_headers/init.o 00:04:33.786 CXX test/cpp_headers/idxd.o 00:04:33.786 CXX test/cpp_headers/ioat_spec.o 00:04:33.786 CXX test/cpp_headers/ioat.o 00:04:33.786 CXX test/cpp_headers/iscsi_spec.o 00:04:33.786 CXX test/cpp_headers/json.o 00:04:33.786 CXX test/cpp_headers/jsonrpc.o 00:04:33.786 CXX test/cpp_headers/likely.o 00:04:33.786 CXX test/cpp_headers/keyring.o 00:04:33.786 CXX test/cpp_headers/keyring_module.o 00:04:33.786 CXX test/cpp_headers/lvol.o 00:04:33.786 CXX test/cpp_headers/log.o 00:04:33.786 CXX test/cpp_headers/md5.o 00:04:33.786 CXX test/cpp_headers/memory.o 00:04:33.786 CXX test/cpp_headers/nbd.o 00:04:33.786 CXX test/cpp_headers/mmio.o 00:04:33.786 CXX test/cpp_headers/nvme.o 00:04:33.786 CXX test/cpp_headers/notify.o 00:04:33.786 CXX test/cpp_headers/net.o 00:04:33.786 CC examples/util/zipf/zipf.o 00:04:33.786 CXX test/cpp_headers/nvme_intel.o 00:04:33.786 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:33.786 CXX test/cpp_headers/nvme_ocssd.o 00:04:33.786 CC app/fio/nvme/fio_plugin.o 00:04:33.786 CXX test/cpp_headers/nvme_zns.o 00:04:33.786 CXX test/cpp_headers/nvmf_cmd.o 00:04:33.786 CXX test/cpp_headers/nvme_spec.o 00:04:33.786 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:33.786 CXX test/cpp_headers/nvmf.o 00:04:33.786 CXX test/cpp_headers/nvmf_transport.o 00:04:33.786 CC examples/ioat/verify/verify.o 00:04:33.786 CXX test/cpp_headers/nvmf_spec.o 00:04:33.786 CXX test/cpp_headers/opal.o 00:04:34.051 CC test/env/vtophys/vtophys.o 00:04:34.051 CC test/app/histogram_perf/histogram_perf.o 00:04:34.051 CC test/app/jsoncat/jsoncat.o 00:04:34.051 CC test/app/stub/stub.o 00:04:34.051 CC test/env/memory/memory_ut.o 00:04:34.051 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:34.051 CC test/thread/poller_perf/poller_perf.o 00:04:34.051 CC test/env/pci/pci_ut.o 00:04:34.051 CC app/fio/bdev/fio_plugin.o 00:04:34.051 CC 
examples/ioat/perf/perf.o 00:04:34.051 CC test/app/bdev_svc/bdev_svc.o 00:04:34.051 LINK spdk_lspci 00:04:34.051 CC test/dma/test_dma/test_dma.o 00:04:34.051 LINK rpc_client_test 00:04:34.051 LINK nvmf_tgt 00:04:34.051 LINK spdk_nvme_discover 00:04:34.314 LINK iscsi_tgt 00:04:34.314 CC test/env/mem_callbacks/mem_callbacks.o 00:04:34.314 LINK spdk_tgt 00:04:34.314 LINK spdk_trace_record 00:04:34.314 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:34.314 LINK interrupt_tgt 00:04:34.314 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:34.314 LINK zipf 00:04:34.582 LINK jsoncat 00:04:34.582 LINK poller_perf 00:04:34.582 LINK histogram_perf 00:04:34.582 LINK vtophys 00:04:34.582 CXX test/cpp_headers/opal_spec.o 00:04:34.582 CXX test/cpp_headers/pci_ids.o 00:04:34.582 CXX test/cpp_headers/pipe.o 00:04:34.582 LINK bdev_svc 00:04:34.582 CXX test/cpp_headers/queue.o 00:04:34.582 LINK verify 00:04:34.582 CXX test/cpp_headers/reduce.o 00:04:34.582 CXX test/cpp_headers/rpc.o 00:04:34.582 CXX test/cpp_headers/scheduler.o 00:04:34.582 CXX test/cpp_headers/scsi.o 00:04:34.582 CXX test/cpp_headers/scsi_spec.o 00:04:34.582 CXX test/cpp_headers/sock.o 00:04:34.582 CXX test/cpp_headers/stdinc.o 00:04:34.582 LINK env_dpdk_post_init 00:04:34.582 CXX test/cpp_headers/string.o 00:04:34.582 CXX test/cpp_headers/thread.o 00:04:34.582 CXX test/cpp_headers/trace.o 00:04:34.582 CXX test/cpp_headers/trace_parser.o 00:04:34.582 CXX test/cpp_headers/tree.o 00:04:34.582 LINK spdk_dd 00:04:34.582 CXX test/cpp_headers/ublk.o 00:04:34.582 CXX test/cpp_headers/util.o 00:04:34.582 CXX test/cpp_headers/uuid.o 00:04:34.582 CXX test/cpp_headers/version.o 00:04:34.582 CXX test/cpp_headers/vfio_user_pci.o 00:04:34.582 CXX test/cpp_headers/vfio_user_spec.o 00:04:34.582 LINK stub 00:04:34.582 CXX test/cpp_headers/vhost.o 00:04:34.582 CXX test/cpp_headers/vmd.o 00:04:34.582 CXX test/cpp_headers/xor.o 00:04:34.582 CXX test/cpp_headers/zipf.o 00:04:34.582 LINK ioat_perf 00:04:34.582 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:34.582 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:34.849 LINK spdk_trace 00:04:34.849 LINK pci_ut 00:04:34.849 LINK spdk_nvme 00:04:34.849 LINK spdk_bdev 00:04:34.849 CC examples/idxd/perf/perf.o 00:04:34.849 CC examples/sock/hello_world/hello_sock.o 00:04:34.849 CC examples/vmd/lsvmd/lsvmd.o 00:04:34.849 CC test/event/event_perf/event_perf.o 00:04:34.849 CC examples/vmd/led/led.o 00:04:34.849 CC test/event/reactor_perf/reactor_perf.o 00:04:34.849 CC examples/thread/thread/thread_ex.o 00:04:34.849 CC test/event/reactor/reactor.o 00:04:34.849 LINK spdk_nvme_perf 00:04:34.849 CC test/event/app_repeat/app_repeat.o 00:04:34.849 LINK test_dma 00:04:34.849 LINK nvme_fuzz 00:04:34.849 LINK mem_callbacks 00:04:35.107 CC test/event/scheduler/scheduler.o 00:04:35.107 LINK spdk_top 00:04:35.107 LINK vhost_fuzz 00:04:35.107 CC app/vhost/vhost.o 00:04:35.107 LINK lsvmd 00:04:35.107 LINK led 00:04:35.107 LINK event_perf 00:04:35.107 LINK spdk_nvme_identify 00:04:35.107 LINK reactor_perf 00:04:35.107 LINK reactor 00:04:35.107 LINK app_repeat 00:04:35.107 LINK hello_sock 00:04:35.107 LINK thread 00:04:35.107 LINK idxd_perf 00:04:35.107 LINK scheduler 00:04:35.365 LINK vhost 00:04:35.365 LINK memory_ut 00:04:35.365 CC test/nvme/startup/startup.o 00:04:35.365 CC test/nvme/overhead/overhead.o 00:04:35.365 CC test/nvme/reserve/reserve.o 00:04:35.365 CC test/nvme/reset/reset.o 00:04:35.365 CC test/nvme/sgl/sgl.o 00:04:35.365 CC test/nvme/fused_ordering/fused_ordering.o 00:04:35.365 CC test/nvme/aer/aer.o 00:04:35.365 CC 
test/nvme/simple_copy/simple_copy.o 00:04:35.365 CC test/nvme/cuse/cuse.o 00:04:35.365 CC test/nvme/connect_stress/connect_stress.o 00:04:35.365 CC test/nvme/err_injection/err_injection.o 00:04:35.365 CC test/nvme/compliance/nvme_compliance.o 00:04:35.365 CC test/accel/dif/dif.o 00:04:35.365 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:35.365 CC test/nvme/fdp/fdp.o 00:04:35.365 CC test/nvme/e2edp/nvme_dp.o 00:04:35.365 CC test/nvme/boot_partition/boot_partition.o 00:04:35.365 CC test/blobfs/mkfs/mkfs.o 00:04:35.622 CC test/lvol/esnap/esnap.o 00:04:35.622 CC examples/nvme/hello_world/hello_world.o 00:04:35.622 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.622 CC examples/nvme/hotplug/hotplug.o 00:04:35.622 CC examples/nvme/abort/abort.o 00:04:35.622 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:35.622 CC examples/nvme/arbitration/arbitration.o 00:04:35.622 CC examples/nvme/reconnect/reconnect.o 00:04:35.622 LINK startup 00:04:35.622 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.622 LINK connect_stress 00:04:35.622 LINK boot_partition 00:04:35.622 LINK err_injection 00:04:35.622 CC examples/accel/perf/accel_perf.o 00:04:35.622 LINK reserve 00:04:35.622 LINK doorbell_aers 00:04:35.622 LINK fused_ordering 00:04:35.622 LINK reset 00:04:35.622 CC examples/blob/cli/blobcli.o 00:04:35.622 LINK simple_copy 00:04:35.622 CC examples/blob/hello_world/hello_blob.o 00:04:35.622 LINK sgl 00:04:35.622 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:35.622 LINK mkfs 00:04:35.622 LINK overhead 00:04:35.622 LINK nvme_dp 00:04:35.622 LINK nvme_compliance 00:04:35.622 LINK aer 00:04:35.880 LINK pmr_persistence 00:04:35.880 LINK fdp 00:04:35.880 LINK hotplug 00:04:35.880 LINK hello_world 00:04:35.880 LINK cmb_copy 00:04:35.880 LINK iscsi_fuzz 00:04:35.880 LINK arbitration 00:04:35.880 LINK reconnect 00:04:35.880 LINK abort 00:04:35.880 LINK hello_blob 00:04:35.880 LINK hello_fsdev 00:04:35.880 LINK nvme_manage 00:04:36.139 LINK dif 00:04:36.139 LINK accel_perf 00:04:36.139 LINK blobcli 00:04:36.397 LINK cuse 00:04:36.397 CC examples/bdev/hello_world/hello_bdev.o 00:04:36.397 CC test/bdev/bdevio/bdevio.o 00:04:36.656 CC examples/bdev/bdevperf/bdevperf.o 00:04:36.656 LINK hello_bdev 00:04:36.915 LINK bdevio 00:04:37.174 LINK bdevperf 00:04:37.741 CC examples/nvmf/nvmf/nvmf.o 00:04:37.741 LINK nvmf 00:04:39.117 LINK esnap 00:04:39.375 00:04:39.375 real 0m54.500s 00:04:39.375 user 7m43.156s 00:04:39.375 sys 3m21.312s 00:04:39.375 10:46:28 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:39.375 10:46:28 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.375 ************************************ 00:04:39.375 END TEST make 00:04:39.375 ************************************ 00:04:39.375 10:46:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.375 10:46:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.375 10:46:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.375 10:46:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.375 10:46:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.375 10:46:28 -- pm/common@44 -- $ pid=1190734 00:04:39.375 10:46:28 -- pm/common@50 -- $ kill -TERM 1190734 00:04:39.375 10:46:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.375 10:46:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.375 10:46:28 -- pm/common@44 -- 
$ pid=1190736 00:04:39.375 10:46:28 -- pm/common@50 -- $ kill -TERM 1190736 00:04:39.375 10:46:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.375 10:46:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:39.375 10:46:28 -- pm/common@44 -- $ pid=1190738 00:04:39.375 10:46:28 -- pm/common@50 -- $ kill -TERM 1190738 00:04:39.375 10:46:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.375 10:46:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:39.375 10:46:28 -- pm/common@44 -- $ pid=1190764 00:04:39.375 10:46:28 -- pm/common@50 -- $ sudo -E kill -TERM 1190764 00:04:39.375 10:46:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:39.375 10:46:28 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:04:39.375 10:46:28 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.635 10:46:28 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.635 10:46:28 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.635 10:46:28 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.635 10:46:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.635 10:46:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.635 10:46:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.635 10:46:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.635 10:46:28 -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.635 10:46:28 -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.635 10:46:28 -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.635 10:46:28 -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.635 10:46:28 -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.635 10:46:28 -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.635 10:46:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.635 10:46:28 -- scripts/common.sh@344 -- # case "$op" in 00:04:39.635 10:46:28 -- scripts/common.sh@345 -- # : 1 00:04:39.635 10:46:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.635 10:46:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.635 10:46:28 -- scripts/common.sh@365 -- # decimal 1 00:04:39.635 10:46:28 -- scripts/common.sh@353 -- # local d=1 00:04:39.635 10:46:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.635 10:46:28 -- scripts/common.sh@355 -- # echo 1 00:04:39.635 10:46:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.635 10:46:28 -- scripts/common.sh@366 -- # decimal 2 00:04:39.635 10:46:28 -- scripts/common.sh@353 -- # local d=2 00:04:39.635 10:46:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.635 10:46:28 -- scripts/common.sh@355 -- # echo 2 00:04:39.635 10:46:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.635 10:46:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.635 10:46:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.635 10:46:28 -- scripts/common.sh@368 -- # return 0 00:04:39.635 10:46:28 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.635 10:46:28 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.635 --rc genhtml_branch_coverage=1 00:04:39.635 --rc genhtml_function_coverage=1 00:04:39.635 --rc genhtml_legend=1 00:04:39.635 --rc geninfo_all_blocks=1 00:04:39.635 --rc geninfo_unexecuted_blocks=1 00:04:39.635 00:04:39.635 ' 00:04:39.635 10:46:28 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.635 --rc genhtml_branch_coverage=1 00:04:39.635 --rc genhtml_function_coverage=1 00:04:39.635 --rc genhtml_legend=1 00:04:39.635 --rc geninfo_all_blocks=1 00:04:39.635 --rc geninfo_unexecuted_blocks=1 00:04:39.635 00:04:39.635 ' 00:04:39.635 10:46:28 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.635 --rc genhtml_branch_coverage=1 00:04:39.635 --rc genhtml_function_coverage=1 00:04:39.635 --rc genhtml_legend=1 00:04:39.635 --rc geninfo_all_blocks=1 00:04:39.635 --rc geninfo_unexecuted_blocks=1 00:04:39.635 00:04:39.635 ' 00:04:39.635 10:46:28 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.635 --rc genhtml_branch_coverage=1 00:04:39.635 --rc genhtml_function_coverage=1 00:04:39.635 --rc genhtml_legend=1 00:04:39.635 --rc geninfo_all_blocks=1 00:04:39.635 --rc geninfo_unexecuted_blocks=1 00:04:39.635 00:04:39.635 ' 00:04:39.635 10:46:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.635 10:46:28 -- nvmf/common.sh@7 -- # uname -s 00:04:39.635 10:46:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.635 10:46:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.635 10:46:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.635 10:46:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.635 10:46:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.635 10:46:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.635 10:46:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.635 10:46:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.635 10:46:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.635 10:46:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.635 10:46:28 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:04:39.635 10:46:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:04:39.635 10:46:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.635 10:46:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.635 10:46:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:39.635 10:46:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.635 10:46:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:39.635 10:46:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.635 10:46:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.635 10:46:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.635 10:46:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.635 10:46:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.635 10:46:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.635 10:46:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.635 10:46:28 -- paths/export.sh@5 -- # export PATH 00:04:39.635 10:46:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.635 10:46:28 -- nvmf/common.sh@51 -- # : 0 00:04:39.635 10:46:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.635 10:46:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.635 10:46:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.635 10:46:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.635 10:46:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.635 10:46:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.635 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.635 10:46:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.635 10:46:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.635 10:46:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.635 10:46:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:39.635 10:46:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:39.635 10:46:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:39.635 10:46:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:39.635 10:46:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:39.635 
10:46:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:39.635 10:46:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:39.635 10:46:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:39.635 10:46:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:39.635 10:46:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:39.635 10:46:28 -- spdk/autotest.sh@48 -- # udevadm_pid=1252538 00:04:39.635 10:46:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:39.635 10:46:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:39.635 10:46:28 -- pm/common@17 -- # local monitor 00:04:39.635 10:46:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.635 10:46:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.635 10:46:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.635 10:46:28 -- pm/common@21 -- # date +%s 00:04:39.635 10:46:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.635 10:46:28 -- pm/common@21 -- # date +%s 00:04:39.635 10:46:28 -- pm/common@25 -- # sleep 1 00:04:39.635 10:46:28 -- pm/common@21 -- # date +%s 00:04:39.635 10:46:28 -- pm/common@21 -- # date +%s 00:04:39.635 10:46:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663988 00:04:39.635 10:46:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663988 00:04:39.635 10:46:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663988 00:04:39.635 10:46:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731663988 00:04:39.635 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663988_collect-vmstat.pm.log 00:04:39.636 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663988_collect-cpu-load.pm.log 00:04:39.636 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663988_collect-cpu-temp.pm.log 00:04:39.636 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731663988_collect-bmc-pm.bmc.pm.log 00:04:40.571 10:46:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.571 10:46:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.571 10:46:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.571 10:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:40.571 10:46:29 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.571 10:46:29 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:40.571 10:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:40.571 10:46:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:04:40.571 10:46:29 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:40.571 10:46:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:40.571 10:46:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:04:40.571 10:46:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:40.571 10:46:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.571 10:46:29 -- common/autotest_common.sh@1455 -- # uname 00:04:40.571 10:46:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:40.571 10:46:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.571 10:46:29 -- common/autotest_common.sh@1475 -- # uname 00:04:40.571 10:46:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:40.571 10:46:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:40.571 10:46:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:40.830 lcov: LCOV version 1.15 00:04:40.830 10:46:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:53.174 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:53.174 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.145 10:46:51 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:03.145 10:46:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.145 10:46:51 -- common/autotest_common.sh@10 -- # set +x 00:05:03.145 10:46:51 -- spdk/autotest.sh@78 -- # rm -f 00:05:03.145 10:46:51 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.678 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:05.678 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:05.678 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:05.938 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:05.938 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:05.938 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:05.938 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:05.938 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:05.938 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:05.938 10:46:54 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:05.938 10:46:54 -- 
common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:05.938 10:46:54 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:05.938 10:46:54 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:05.938 10:46:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:05.938 10:46:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:05.938 10:46:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:05.938 10:46:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.938 10:46:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:05.938 10:46:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:05.938 10:46:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.938 10:46:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:05.938 10:46:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:05.938 10:46:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:05.938 10:46:54 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:05.938 No valid GPT data, bailing 00:05:05.938 10:46:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:05.938 10:46:54 -- scripts/common.sh@394 -- # pt= 00:05:05.938 10:46:54 -- scripts/common.sh@395 -- # return 1 00:05:05.938 10:46:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:05.938 1+0 records in 00:05:05.938 1+0 records out 00:05:05.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531157 s, 197 MB/s 00:05:05.938 10:46:54 -- spdk/autotest.sh@105 -- # sync 00:05:05.938 10:46:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:05.938 10:46:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:05.938 10:46:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:11.205 10:46:59 -- spdk/autotest.sh@111 -- # uname -s 00:05:11.205 10:46:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:11.205 10:46:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:11.205 10:46:59 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:13.738 Hugepages 00:05:13.738 node hugesize free / total 00:05:13.738 node0 1048576kB 0 / 0 00:05:13.738 node0 2048kB 0 / 0 00:05:13.738 node1 1048576kB 0 / 0 00:05:13.738 node1 2048kB 0 / 0 00:05:13.738 00:05:13.738 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.738 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:13.738 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:13.738 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:13.738 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:13.738 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:13.738 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:13.738 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:13.738 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:13.738 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:13.738 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:13.738 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:13.738 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:13.738 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:13.738 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:13.738 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:13.738 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:13.738 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:13.738 10:47:02 -- spdk/autotest.sh@117 -- # uname 
-s 00:05:13.738 10:47:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:13.738 10:47:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:13.738 10:47:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:16.271 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.272 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.530 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.467 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.467 10:47:06 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:18.401 10:47:07 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:18.401 10:47:07 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:18.401 10:47:07 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:18.401 10:47:07 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:18.401 10:47:07 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:18.401 10:47:07 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:18.401 10:47:07 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.401 10:47:07 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:18.401 10:47:07 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.659 10:47:07 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:18.659 10:47:07 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:05:18.659 10:47:07 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.191 Waiting for block devices as requested 00:05:21.191 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:21.191 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:21.449 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:21.449 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:21.449 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:21.708 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:21.708 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:21.708 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:21.708 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:21.967 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:21.967 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:21.967 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:21.967 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:22.225 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:22.225 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:22.225 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:22.484 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:22.484 10:47:11 -- common/autotest_common.sh@1522 -- # for bdf in 
"${bdfs[@]}" 00:05:22.484 10:47:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:05:22.484 10:47:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:22.484 10:47:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:22.484 10:47:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:22.484 10:47:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:22.484 10:47:11 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:05:22.484 10:47:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:22.484 10:47:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:22.484 10:47:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:22.484 10:47:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:22.484 10:47:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:22.484 10:47:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:22.484 10:47:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:22.484 10:47:11 -- common/autotest_common.sh@1541 -- # continue 00:05:22.484 10:47:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:22.484 10:47:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.484 10:47:11 -- common/autotest_common.sh@10 -- # set +x 00:05:22.484 10:47:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:22.484 10:47:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.484 10:47:11 -- common/autotest_common.sh@10 -- # set +x 00:05:22.484 10:47:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:25.767 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:25.767 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:25.768 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:25.768 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:25.768 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:25.768 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:25.768 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:25.768 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:25.768 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:26.334 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:26.334 10:47:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:26.334 10:47:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.334 
10:47:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.334 10:47:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:26.334 10:47:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:26.334 10:47:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.334 10:47:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:26.334 10:47:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:26.334 10:47:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:26.334 10:47:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:26.334 10:47:15 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:26.334 10:47:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:26.334 10:47:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:26.334 10:47:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.334 10:47:15 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.334 10:47:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:26.334 10:47:15 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:26.334 10:47:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:05:26.334 10:47:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:26.334 10:47:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:26.334 10:47:15 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:26.334 10:47:15 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:26.334 10:47:15 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:26.334 10:47:15 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:26.335 10:47:15 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:05:26.335 10:47:15 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:05:26.335 10:47:15 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1266417 00:05:26.335 10:47:15 -- common/autotest_common.sh@1583 -- # waitforlisten 1266417 00:05:26.335 10:47:15 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.335 10:47:15 -- common/autotest_common.sh@833 -- # '[' -z 1266417 ']' 00:05:26.335 10:47:15 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.335 10:47:15 -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.335 10:47:15 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.335 10:47:15 -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.335 10:47:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.593 [2024-11-15 10:47:15.248894] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
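get_nvme_bdfs_by_id, traced above, filters the discovered NVMe BDFs down to those whose PCI device ID matches the target (0x0a54 here) by reading each function's sysfs device attribute. A self-contained sketch of that filter, assuming the sysfs paths shown in this log:

# Sketch: keep only PCI functions whose device ID matches $want.
want=0x0a54
bdfs=(0000:5e:00.0)  # in this run the list comes from gen_nvme.sh piped to jq
matches=()
for bdf in "${bdfs[@]}"; do
  [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && matches+=("$bdf")
done
printf '%s\n' "${matches[@]}"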
00:05:26.594 [2024-11-15 10:47:15.248940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266417 ] 00:05:26.594 [2024-11-15 10:47:15.312116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.594 [2024-11-15 10:47:15.352866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.852 10:47:15 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.852 10:47:15 -- common/autotest_common.sh@866 -- # return 0 00:05:26.852 10:47:15 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:26.852 10:47:15 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:26.852 10:47:15 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:30.133 nvme0n1 00:05:30.133 10:47:18 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:30.133 [2024-11-15 10:47:18.734629] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:30.133 request: 00:05:30.133 { 00:05:30.133 "nvme_ctrlr_name": "nvme0", 00:05:30.133 "password": "test", 00:05:30.133 "method": "bdev_nvme_opal_revert", 00:05:30.133 "req_id": 1 00:05:30.133 } 00:05:30.133 Got JSON-RPC error response 00:05:30.133 response: 00:05:30.133 { 00:05:30.133 "code": -32602, 00:05:30.133 "message": "Invalid parameters" 00:05:30.133 } 00:05:30.133 10:47:18 -- common/autotest_common.sh@1589 -- # true 00:05:30.133 10:47:18 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:30.133 10:47:18 -- common/autotest_common.sh@1593 -- # killprocess 1266417 00:05:30.133 10:47:18 -- common/autotest_common.sh@952 -- # '[' -z 1266417 ']' 00:05:30.133 10:47:18 -- common/autotest_common.sh@956 -- # kill -0 1266417 00:05:30.133 10:47:18 -- common/autotest_common.sh@957 -- # uname 00:05:30.133 10:47:18 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:30.133 10:47:18 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1266417 00:05:30.133 10:47:18 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:30.133 10:47:18 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:30.133 10:47:18 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1266417' 00:05:30.133 killing process with pid 1266417 00:05:30.133 10:47:18 -- common/autotest_common.sh@971 -- # kill 1266417 00:05:30.133 10:47:18 -- common/autotest_common.sh@976 -- # wait 1266417 00:05:31.508 10:47:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:31.508 10:47:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:31.508 10:47:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:31.766 10:47:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:31.766 10:47:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:31.766 10:47:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.766 10:47:20 -- common/autotest_common.sh@10 -- # set +x 00:05:31.766 10:47:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:31.766 10:47:20 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:31.766 10:47:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.766 10:47:20 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:05:31.766 10:47:20 -- common/autotest_common.sh@10 -- # set +x 00:05:31.766 ************************************ 00:05:31.766 START TEST env 00:05:31.766 ************************************ 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:31.767 * Looking for test storage... 00:05:31.767 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.767 10:47:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.767 10:47:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.767 10:47:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.767 10:47:20 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.767 10:47:20 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.767 10:47:20 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.767 10:47:20 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.767 10:47:20 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.767 10:47:20 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.767 10:47:20 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.767 10:47:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.767 10:47:20 env -- scripts/common.sh@344 -- # case "$op" in 00:05:31.767 10:47:20 env -- scripts/common.sh@345 -- # : 1 00:05:31.767 10:47:20 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.767 10:47:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.767 10:47:20 env -- scripts/common.sh@365 -- # decimal 1 00:05:31.767 10:47:20 env -- scripts/common.sh@353 -- # local d=1 00:05:31.767 10:47:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.767 10:47:20 env -- scripts/common.sh@355 -- # echo 1 00:05:31.767 10:47:20 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.767 10:47:20 env -- scripts/common.sh@366 -- # decimal 2 00:05:31.767 10:47:20 env -- scripts/common.sh@353 -- # local d=2 00:05:31.767 10:47:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.767 10:47:20 env -- scripts/common.sh@355 -- # echo 2 00:05:31.767 10:47:20 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.767 10:47:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.767 10:47:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.767 10:47:20 env -- scripts/common.sh@368 -- # return 0 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.767 --rc genhtml_branch_coverage=1 00:05:31.767 --rc genhtml_function_coverage=1 00:05:31.767 --rc genhtml_legend=1 00:05:31.767 --rc geninfo_all_blocks=1 00:05:31.767 --rc geninfo_unexecuted_blocks=1 00:05:31.767 00:05:31.767 ' 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.767 --rc genhtml_branch_coverage=1 00:05:31.767 --rc genhtml_function_coverage=1 00:05:31.767 --rc genhtml_legend=1 00:05:31.767 --rc geninfo_all_blocks=1 00:05:31.767 --rc geninfo_unexecuted_blocks=1 00:05:31.767 00:05:31.767 ' 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.767 --rc genhtml_branch_coverage=1 00:05:31.767 --rc genhtml_function_coverage=1 00:05:31.767 --rc genhtml_legend=1 00:05:31.767 --rc geninfo_all_blocks=1 00:05:31.767 --rc geninfo_unexecuted_blocks=1 00:05:31.767 00:05:31.767 ' 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.767 --rc genhtml_branch_coverage=1 00:05:31.767 --rc genhtml_function_coverage=1 00:05:31.767 --rc genhtml_legend=1 00:05:31.767 --rc geninfo_all_blocks=1 00:05:31.767 --rc geninfo_unexecuted_blocks=1 00:05:31.767 00:05:31.767 ' 00:05:31.767 10:47:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.767 10:47:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.767 10:47:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.767 ************************************ 00:05:31.767 START TEST env_memory 00:05:31.767 ************************************ 00:05:31.767 10:47:20 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:31.767 00:05:31.767 00:05:31.767 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.767 http://cunit.sourceforge.net/ 00:05:31.767 00:05:31.767 00:05:31.767 Suite: memory 00:05:32.026 Test: alloc and free memory map ...[2024-11-15 10:47:20.673069] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:32.026 passed 00:05:32.026 Test: mem map translation ...[2024-11-15 10:47:20.692159] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:32.026 [2024-11-15 10:47:20.692181] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:32.026 [2024-11-15 10:47:20.692215] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:32.026 [2024-11-15 10:47:20.692222] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:32.026 passed 00:05:32.026 Test: mem map registration ...[2024-11-15 10:47:20.728951] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:32.026 [2024-11-15 10:47:20.728965] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:32.026 passed 00:05:32.026 Test: mem map adjacent registrations ...passed 00:05:32.026 00:05:32.026 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.026 suites 1 1 n/a 0 0 00:05:32.026 tests 4 4 4 0 0 00:05:32.026 asserts 152 152 152 0 n/a 00:05:32.026 00:05:32.026 Elapsed time = 0.136 seconds 00:05:32.026 00:05:32.026 real 0m0.149s 00:05:32.026 user 0m0.141s 00:05:32.026 sys 0m0.007s 00:05:32.026 10:47:20 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.026 10:47:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 ************************************ 00:05:32.026 END TEST env_memory 00:05:32.026 ************************************ 00:05:32.026 10:47:20 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:32.026 10:47:20 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:32.026 10:47:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.026 10:47:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.026 ************************************ 00:05:32.026 START TEST env_vtophys 00:05:32.026 ************************************ 00:05:32.026 10:47:20 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:32.026 EAL: lib.eal log level changed from notice to debug 00:05:32.026 EAL: Detected lcore 0 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 1 as core 1 on socket 0 00:05:32.026 EAL: Detected lcore 2 as core 2 on socket 0 00:05:32.026 EAL: Detected lcore 3 as core 3 on socket 0 00:05:32.026 EAL: Detected lcore 4 as core 4 on socket 0 00:05:32.026 EAL: Detected lcore 5 as core 5 on socket 0 00:05:32.026 EAL: Detected lcore 6 as core 6 on socket 0 00:05:32.026 EAL: Detected lcore 7 as core 8 on socket 0 00:05:32.026 EAL: Detected lcore 8 as core 9 on socket 0 00:05:32.026 EAL: Detected lcore 9 as core 10 on socket 0 00:05:32.026 EAL: Detected lcore 10 as core 11 on socket 0 00:05:32.026 
EAL: Detected lcore 11 as core 12 on socket 0 00:05:32.026 EAL: Detected lcore 12 as core 13 on socket 0 00:05:32.026 EAL: Detected lcore 13 as core 16 on socket 0 00:05:32.026 EAL: Detected lcore 14 as core 17 on socket 0 00:05:32.026 EAL: Detected lcore 15 as core 18 on socket 0 00:05:32.026 EAL: Detected lcore 16 as core 19 on socket 0 00:05:32.026 EAL: Detected lcore 17 as core 20 on socket 0 00:05:32.026 EAL: Detected lcore 18 as core 21 on socket 0 00:05:32.026 EAL: Detected lcore 19 as core 25 on socket 0 00:05:32.026 EAL: Detected lcore 20 as core 26 on socket 0 00:05:32.026 EAL: Detected lcore 21 as core 27 on socket 0 00:05:32.026 EAL: Detected lcore 22 as core 28 on socket 0 00:05:32.026 EAL: Detected lcore 23 as core 29 on socket 0 00:05:32.026 EAL: Detected lcore 24 as core 0 on socket 1 00:05:32.026 EAL: Detected lcore 25 as core 1 on socket 1 00:05:32.026 EAL: Detected lcore 26 as core 2 on socket 1 00:05:32.026 EAL: Detected lcore 27 as core 3 on socket 1 00:05:32.026 EAL: Detected lcore 28 as core 4 on socket 1 00:05:32.026 EAL: Detected lcore 29 as core 5 on socket 1 00:05:32.026 EAL: Detected lcore 30 as core 6 on socket 1 00:05:32.026 EAL: Detected lcore 31 as core 8 on socket 1 00:05:32.026 EAL: Detected lcore 32 as core 9 on socket 1 00:05:32.026 EAL: Detected lcore 33 as core 10 on socket 1 00:05:32.026 EAL: Detected lcore 34 as core 11 on socket 1 00:05:32.026 EAL: Detected lcore 35 as core 12 on socket 1 00:05:32.026 EAL: Detected lcore 36 as core 13 on socket 1 00:05:32.026 EAL: Detected lcore 37 as core 16 on socket 1 00:05:32.026 EAL: Detected lcore 38 as core 17 on socket 1 00:05:32.026 EAL: Detected lcore 39 as core 18 on socket 1 00:05:32.026 EAL: Detected lcore 40 as core 19 on socket 1 00:05:32.026 EAL: Detected lcore 41 as core 20 on socket 1 00:05:32.026 EAL: Detected lcore 42 as core 21 on socket 1 00:05:32.026 EAL: Detected lcore 43 as core 25 on socket 1 00:05:32.026 EAL: Detected lcore 44 as core 26 on socket 1 00:05:32.026 EAL: Detected lcore 45 as core 27 on socket 1 00:05:32.026 EAL: Detected lcore 46 as core 28 on socket 1 00:05:32.026 EAL: Detected lcore 47 as core 29 on socket 1 00:05:32.026 EAL: Detected lcore 48 as core 0 on socket 0 00:05:32.026 EAL: Detected lcore 49 as core 1 on socket 0 00:05:32.026 EAL: Detected lcore 50 as core 2 on socket 0 00:05:32.026 EAL: Detected lcore 51 as core 3 on socket 0 00:05:32.026 EAL: Detected lcore 52 as core 4 on socket 0 00:05:32.026 EAL: Detected lcore 53 as core 5 on socket 0 00:05:32.026 EAL: Detected lcore 54 as core 6 on socket 0 00:05:32.026 EAL: Detected lcore 55 as core 8 on socket 0 00:05:32.026 EAL: Detected lcore 56 as core 9 on socket 0 00:05:32.026 EAL: Detected lcore 57 as core 10 on socket 0 00:05:32.026 EAL: Detected lcore 58 as core 11 on socket 0 00:05:32.026 EAL: Detected lcore 59 as core 12 on socket 0 00:05:32.026 EAL: Detected lcore 60 as core 13 on socket 0 00:05:32.026 EAL: Detected lcore 61 as core 16 on socket 0 00:05:32.026 EAL: Detected lcore 62 as core 17 on socket 0 00:05:32.026 EAL: Detected lcore 63 as core 18 on socket 0 00:05:32.026 EAL: Detected lcore 64 as core 19 on socket 0 00:05:32.026 EAL: Detected lcore 65 as core 20 on socket 0 00:05:32.026 EAL: Detected lcore 66 as core 21 on socket 0 00:05:32.026 EAL: Detected lcore 67 as core 25 on socket 0 00:05:32.026 EAL: Detected lcore 68 as core 26 on socket 0 00:05:32.026 EAL: Detected lcore 69 as core 27 on socket 0 00:05:32.026 EAL: Detected lcore 70 as core 28 on socket 0 00:05:32.026 EAL: Detected lcore 71 as core 
29 on socket 0 00:05:32.026 EAL: Detected lcore 72 as core 0 on socket 1 00:05:32.026 EAL: Detected lcore 73 as core 1 on socket 1 00:05:32.026 EAL: Detected lcore 74 as core 2 on socket 1 00:05:32.026 EAL: Detected lcore 75 as core 3 on socket 1 00:05:32.026 EAL: Detected lcore 76 as core 4 on socket 1 00:05:32.027 EAL: Detected lcore 77 as core 5 on socket 1 00:05:32.027 EAL: Detected lcore 78 as core 6 on socket 1 00:05:32.027 EAL: Detected lcore 79 as core 8 on socket 1 00:05:32.027 EAL: Detected lcore 80 as core 9 on socket 1 00:05:32.027 EAL: Detected lcore 81 as core 10 on socket 1 00:05:32.027 EAL: Detected lcore 82 as core 11 on socket 1 00:05:32.027 EAL: Detected lcore 83 as core 12 on socket 1 00:05:32.027 EAL: Detected lcore 84 as core 13 on socket 1 00:05:32.027 EAL: Detected lcore 85 as core 16 on socket 1 00:05:32.027 EAL: Detected lcore 86 as core 17 on socket 1 00:05:32.027 EAL: Detected lcore 87 as core 18 on socket 1 00:05:32.027 EAL: Detected lcore 88 as core 19 on socket 1 00:05:32.027 EAL: Detected lcore 89 as core 20 on socket 1 00:05:32.027 EAL: Detected lcore 90 as core 21 on socket 1 00:05:32.027 EAL: Detected lcore 91 as core 25 on socket 1 00:05:32.027 EAL: Detected lcore 92 as core 26 on socket 1 00:05:32.027 EAL: Detected lcore 93 as core 27 on socket 1 00:05:32.027 EAL: Detected lcore 94 as core 28 on socket 1 00:05:32.027 EAL: Detected lcore 95 as core 29 on socket 1 00:05:32.027 EAL: Maximum logical cores by configuration: 128 00:05:32.027 EAL: Detected CPU lcores: 96 00:05:32.027 EAL: Detected NUMA nodes: 2 00:05:32.027 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:32.027 EAL: Detected shared linkage of DPDK 00:05:32.027 EAL: No shared files mode enabled, IPC will be disabled 00:05:32.027 EAL: Bus pci wants IOVA as 'DC' 00:05:32.027 EAL: Buses did not request a specific IOVA mode. 00:05:32.027 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:32.027 EAL: Selected IOVA mode 'VA' 00:05:32.027 EAL: Probing VFIO support... 00:05:32.027 EAL: IOMMU type 1 (Type 1) is supported 00:05:32.027 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:32.027 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:32.027 EAL: VFIO support initialized 00:05:32.027 EAL: Ask a virtual area of 0x2e000 bytes 00:05:32.027 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:32.027 EAL: Setting up physically contiguous memory... 
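The lcore inventory above ("Detected lcore N as core M on socket S") is EAL reading the kernel's CPU topology; the same mapping is visible directly in sysfs. A sketch that reproduces the table on a Linux host, independent of DPDK (offline CPUs without a topology directory are not handled):

# Sketch: print logical CPU -> physical core/socket, as EAL reports it.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  n=${cpu##*cpu}
  core=$(<"$cpu/topology/core_id")
  sock=$(<"$cpu/topology/physical_package_id")
  echo "lcore $n as core $core on socket $sock"
done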
00:05:32.027 EAL: Setting maximum number of open files to 524288 00:05:32.027 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:32.027 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:32.027 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:32.027 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:32.027 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.027 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:32.027 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.027 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.027 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:32.027 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:32.027 EAL: Hugepages will be freed exactly as allocated. 00:05:32.027 EAL: No shared files mode enabled, IPC is disabled 00:05:32.027 EAL: No shared files mode enabled, IPC is disabled 00:05:32.027 EAL: TSC frequency is ~2300000 KHz 00:05:32.027 EAL: Main lcore 0 is ready (tid=7f1e47b12a00;cpuset=[0]) 00:05:32.027 EAL: Trying to obtain current memory policy. 00:05:32.027 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.027 EAL: Restoring previous memory policy: 0 00:05:32.027 EAL: request: mp_malloc_sync 00:05:32.027 EAL: No shared files mode enabled, IPC is disabled 00:05:32.027 EAL: Heap on socket 0 was expanded by 2MB 00:05:32.027 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:32.286 EAL: Mem event callback 'spdk:(nil)' registered 00:05:32.286 00:05:32.286 00:05:32.286 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.286 http://cunit.sourceforge.net/ 00:05:32.286 00:05:32.286 00:05:32.286 Suite: components_suite 00:05:32.286 Test: vtophys_malloc_test ...passed 00:05:32.286 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.286 EAL: Trying to obtain current memory policy. 00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.286 EAL: Trying to obtain current memory policy. 00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.286 EAL: Trying to obtain current memory policy. 
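The 2MB heap expansion above only works because hugepages were reserved beforehand; the setup.sh status table earlier in this run showed the per-node pools. A quick sketch for inspecting the same state outside the harness, assuming standard Linux hugetlbfs accounting:

# Sketch: system-wide and per-NUMA-node 2MB hugepage counts.
grep Huge /proc/meminfo
for node in /sys/devices/system/node/node[0-9]*; do
  echo "$node: $(<"$node/hugepages/hugepages-2048kB/nr_hugepages") x 2MB pages"
done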
00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.286 EAL: Trying to obtain current memory policy. 00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.286 EAL: Trying to obtain current memory policy. 00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.286 EAL: Trying to obtain current memory policy. 00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 130MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 130MB 00:05:32.286 EAL: Trying to obtain current memory policy. 00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.286 EAL: Restoring previous memory policy: 4 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was expanded by 258MB 00:05:32.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.286 EAL: request: mp_malloc_sync 00:05:32.286 EAL: No shared files mode enabled, IPC is disabled 00:05:32.286 EAL: Heap on socket 0 was shrunk by 258MB 00:05:32.286 EAL: Trying to obtain current memory policy. 
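The vtophys_spdk_malloc_test sizes step 4, 6, 10, 18, 34 MB and so on: each request looks like (2^k)+2 MB. That formula is a pattern inferred from this log, not taken from the test source, but it reproduces every step through the 1026MB expansion seen below:

# Sketch: the allocation sizes observed in this suite, read as (2^k)+2 MB.
for k in $(seq 1 10); do
  echo "$(( (1 << k) + 2 ))MB"
done
# prints 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB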
00:05:32.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.545 EAL: Restoring previous memory policy: 4 00:05:32.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.545 EAL: request: mp_malloc_sync 00:05:32.545 EAL: No shared files mode enabled, IPC is disabled 00:05:32.545 EAL: Heap on socket 0 was expanded by 514MB 00:05:32.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.545 EAL: request: mp_malloc_sync 00:05:32.545 EAL: No shared files mode enabled, IPC is disabled 00:05:32.545 EAL: Heap on socket 0 was shrunk by 514MB 00:05:32.545 EAL: Trying to obtain current memory policy. 00:05:32.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.803 EAL: Restoring previous memory policy: 4 00:05:32.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.803 EAL: request: mp_malloc_sync 00:05:32.803 EAL: No shared files mode enabled, IPC is disabled 00:05:32.803 EAL: Heap on socket 0 was expanded by 1026MB 00:05:33.061 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.061 EAL: request: mp_malloc_sync 00:05:33.061 EAL: No shared files mode enabled, IPC is disabled 00:05:33.061 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:33.061 passed 00:05:33.061 00:05:33.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.061 suites 1 1 n/a 0 0 00:05:33.061 tests 2 2 2 0 0 00:05:33.061 asserts 497 497 497 0 n/a 00:05:33.061 00:05:33.061 Elapsed time = 0.964 seconds 00:05:33.061 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.061 EAL: request: mp_malloc_sync 00:05:33.061 EAL: No shared files mode enabled, IPC is disabled 00:05:33.061 EAL: Heap on socket 0 was shrunk by 2MB 00:05:33.061 EAL: No shared files mode enabled, IPC is disabled 00:05:33.061 EAL: No shared files mode enabled, IPC is disabled 00:05:33.061 EAL: No shared files mode enabled, IPC is disabled 00:05:33.061 00:05:33.061 real 0m1.080s 00:05:33.061 user 0m0.632s 00:05:33.061 sys 0m0.425s 00:05:33.061 10:47:21 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.061 10:47:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:33.061 ************************************ 00:05:33.061 END TEST env_vtophys 00:05:33.061 ************************************ 00:05:33.320 10:47:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:33.320 10:47:21 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.320 10:47:21 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.320 10:47:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 START TEST env_pci 00:05:33.320 ************************************ 00:05:33.320 10:47:21 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:33.320 00:05:33.320 00:05:33.320 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.320 http://cunit.sourceforge.net/ 00:05:33.320 00:05:33.320 00:05:33.320 Suite: pci 00:05:33.320 Test: pci_hook ...[2024-11-15 10:47:22.010409] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1267625 has claimed it 00:05:33.320 EAL: Cannot find device (10000:00:01.0) 00:05:33.320 EAL: Failed to attach device on primary process 00:05:33.320 passed 00:05:33.320 00:05:33.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.320 suites 1 
1 n/a 0 0 00:05:33.320 tests 1 1 1 0 0 00:05:33.320 asserts 25 25 25 0 n/a 00:05:33.320 00:05:33.320 Elapsed time = 0.026 seconds 00:05:33.320 00:05:33.320 real 0m0.043s 00:05:33.320 user 0m0.011s 00:05:33.320 sys 0m0.032s 00:05:33.320 10:47:22 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.320 10:47:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 END TEST env_pci 00:05:33.320 ************************************ 00:05:33.320 10:47:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:33.320 10:47:22 env -- env/env.sh@15 -- # uname 00:05:33.320 10:47:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:33.320 10:47:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:33.320 10:47:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.320 10:47:22 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:33.320 10:47:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.320 10:47:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 START TEST env_dpdk_post_init 00:05:33.320 ************************************ 00:05:33.320 10:47:22 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.320 EAL: Detected CPU lcores: 96 00:05:33.320 EAL: Detected NUMA nodes: 2 00:05:33.320 EAL: Detected shared linkage of DPDK 00:05:33.320 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.320 EAL: Selected IOVA mode 'VA' 00:05:33.320 EAL: VFIO support initialized 00:05:33.320 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.579 EAL: Using IOMMU type 1 (Type 1) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:33.579 EAL: Ignore mapping IO port bar(1) 00:05:33.579 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:34.514 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:80:04.2 (socket 1) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:34.514 EAL: Ignore mapping IO port bar(1) 00:05:34.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:37.795 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:37.795 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:37.795 Starting DPDK initialization... 00:05:37.795 Starting SPDK post initialization... 00:05:37.795 SPDK NVMe probe 00:05:37.795 Attaching to 0000:5e:00.0 00:05:37.795 Attached to 0000:5e:00.0 00:05:37.795 Cleaning up... 00:05:37.795 00:05:37.795 real 0m4.372s 00:05:37.795 user 0m2.990s 00:05:37.795 sys 0m0.450s 00:05:37.795 10:47:26 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.795 10:47:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:37.795 ************************************ 00:05:37.795 END TEST env_dpdk_post_init 00:05:37.795 ************************************ 00:05:37.795 10:47:26 env -- env/env.sh@26 -- # uname 00:05:37.795 10:47:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:37.795 10:47:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.795 10:47:26 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.795 10:47:26 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.795 10:47:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.795 ************************************ 00:05:37.795 START TEST env_mem_callbacks 00:05:37.795 ************************************ 00:05:37.795 10:47:26 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.795 EAL: Detected CPU lcores: 96 00:05:37.795 EAL: Detected NUMA nodes: 2 00:05:37.795 EAL: Detected shared linkage of DPDK 00:05:37.795 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.795 EAL: Selected IOVA mode 'VA' 00:05:37.795 EAL: VFIO support initialized 00:05:37.795 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.795 00:05:37.795 00:05:37.795 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.795 http://cunit.sourceforge.net/ 00:05:37.795 00:05:37.795 00:05:37.795 Suite: memory 00:05:37.795 Test: test ... 
00:05:37.795 register 0x200000200000 2097152 00:05:37.795 malloc 3145728 00:05:37.795 register 0x200000400000 4194304 00:05:37.795 buf 0x200000500000 len 3145728 PASSED 00:05:37.795 malloc 64 00:05:37.795 buf 0x2000004fff40 len 64 PASSED 00:05:37.795 malloc 4194304 00:05:37.796 register 0x200000800000 6291456 00:05:37.796 buf 0x200000a00000 len 4194304 PASSED 00:05:37.796 free 0x200000500000 3145728 00:05:37.796 free 0x2000004fff40 64 00:05:37.796 unregister 0x200000400000 4194304 PASSED 00:05:37.796 free 0x200000a00000 4194304 00:05:37.796 unregister 0x200000800000 6291456 PASSED 00:05:37.796 malloc 8388608 00:05:37.796 register 0x200000400000 10485760 00:05:37.796 buf 0x200000600000 len 8388608 PASSED 00:05:37.796 free 0x200000600000 8388608 00:05:37.796 unregister 0x200000400000 10485760 PASSED 00:05:37.796 passed 00:05:37.796 00:05:37.796 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.796 suites 1 1 n/a 0 0 00:05:37.796 tests 1 1 1 0 0 00:05:37.796 asserts 15 15 15 0 n/a 00:05:37.796 00:05:37.796 Elapsed time = 0.005 seconds 00:05:37.796 00:05:37.796 real 0m0.055s 00:05:37.796 user 0m0.018s 00:05:37.796 sys 0m0.037s 00:05:37.796 10:47:26 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.796 10:47:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:37.796 ************************************ 00:05:37.796 END TEST env_mem_callbacks 00:05:37.796 ************************************ 00:05:37.796 00:05:37.796 real 0m6.203s 00:05:37.796 user 0m4.032s 00:05:37.796 sys 0m1.249s 00:05:37.796 10:47:26 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.796 10:47:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.796 ************************************ 00:05:37.796 END TEST env 00:05:37.796 ************************************ 00:05:37.796 10:47:26 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.796 10:47:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.796 10:47:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.796 10:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.055 ************************************ 00:05:38.055 START TEST rpc 00:05:38.055 ************************************ 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:38.055 * Looking for test storage... 
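The rpc suite header that follows (and skip_rpc's, further down) xtraces the lcov version gate from scripts/common.sh: "lt 1.15 2" calls cmp_versions, which splits each version string on the characters .-: and compares the fields numerically, left to right, with missing fields counted as 0. A standalone bash sketch of that comparison -- a simplified reconstruction for illustration, with cmp_lt as a name invented here, not the script verbatim:

# Sketch of the field-wise version compare xtraced below; purely numeric
# fields are assumed (suffixes like "-rc1" would need extra handling).
cmp_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local f1=${ver1[v]:-0} f2=${ver2[v]:-0}
        (( f1 < f2 )) && return 0   # strictly smaller field: less-than holds
        (( f1 > f2 )) && return 1   # strictly larger field: less-than fails
    done
    return 1                        # all fields equal: not less-than
}
cmp_lt 1.15 2 && echo "lcov older than 2.x: use the legacy --rc spellings"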
00:05:38.055 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.055 10:47:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.055 10:47:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.055 10:47:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.055 10:47:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.055 10:47:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.055 10:47:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.055 10:47:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.055 10:47:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:38.055 10:47:26 rpc -- scripts/common.sh@345 -- # : 1 00:05:38.055 10:47:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.055 10:47:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.055 10:47:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:38.055 10:47:26 rpc -- scripts/common.sh@353 -- # local d=1 00:05:38.055 10:47:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.055 10:47:26 rpc -- scripts/common.sh@355 -- # echo 1 00:05:38.055 10:47:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.055 10:47:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@353 -- # local d=2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.055 10:47:26 rpc -- scripts/common.sh@355 -- # echo 2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.055 10:47:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.055 10:47:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.055 10:47:26 rpc -- scripts/common.sh@368 -- # return 0 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:38.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.055 --rc genhtml_branch_coverage=1 00:05:38.055 --rc genhtml_function_coverage=1 00:05:38.055 --rc genhtml_legend=1 00:05:38.055 --rc geninfo_all_blocks=1 00:05:38.055 --rc geninfo_unexecuted_blocks=1 00:05:38.055 00:05:38.055 ' 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:38.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.055 --rc genhtml_branch_coverage=1 00:05:38.055 --rc genhtml_function_coverage=1 00:05:38.055 --rc genhtml_legend=1 00:05:38.055 --rc geninfo_all_blocks=1 00:05:38.055 --rc geninfo_unexecuted_blocks=1 00:05:38.055 00:05:38.055 ' 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:38.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.055 --rc genhtml_branch_coverage=1 00:05:38.055 --rc genhtml_function_coverage=1 00:05:38.055 
--rc genhtml_legend=1 00:05:38.055 --rc geninfo_all_blocks=1 00:05:38.055 --rc geninfo_unexecuted_blocks=1 00:05:38.055 00:05:38.055 ' 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:38.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.055 --rc genhtml_branch_coverage=1 00:05:38.055 --rc genhtml_function_coverage=1 00:05:38.055 --rc genhtml_legend=1 00:05:38.055 --rc geninfo_all_blocks=1 00:05:38.055 --rc geninfo_unexecuted_blocks=1 00:05:38.055 00:05:38.055 ' 00:05:38.055 10:47:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1268587 00:05:38.055 10:47:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.055 10:47:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:38.055 10:47:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1268587 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@833 -- # '[' -z 1268587 ']' 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.055 10:47:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.055 [2024-11-15 10:47:26.924521] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:05:38.055 [2024-11-15 10:47:26.924567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268587 ] 00:05:38.314 [2024-11-15 10:47:26.984486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.314 [2024-11-15 10:47:27.023428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:38.314 [2024-11-15 10:47:27.023464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1268587' to capture a snapshot of events at runtime. 00:05:38.314 [2024-11-15 10:47:27.023472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:38.314 [2024-11-15 10:47:27.023478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:38.314 [2024-11-15 10:47:27.023484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1268587 for offline analysis/debug. 
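The waitforlisten step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") boils down to retrying an innocuous RPC until the freshly launched spdk_tgt answers on its socket. A minimal sketch of that polling loop, assuming SPDK's scripts/rpc.py and mirroring the max_retries=100 visible in the trace -- the helper name wait_for_rpc is invented here, not SPDK's actual implementation:

# Poll the target's RPC socket until it responds, or give up after ~10s.
rpcpy=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
wait_for_rpc() {
    local sock=${1:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        "$rpcpy" -s "$sock" spdk_get_version >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    echo "spdk_tgt never listened on $sock" >&2
    return 1
}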
00:05:38.314 [2024-11-15 10:47:27.024086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.573 10:47:27 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.573 10:47:27 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.573 10:47:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:38.573 10:47:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:38.573 10:47:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:38.573 10:47:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:38.573 10:47:27 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.573 10:47:27 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.573 10:47:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 ************************************ 00:05:38.573 START TEST rpc_integrity 00:05:38.573 ************************************ 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.573 { 00:05:38.573 "name": "Malloc0", 00:05:38.573 "aliases": [ 00:05:38.573 "618d7a4a-0d2f-47cf-b2f4-1ba2a17343f4" 00:05:38.573 ], 00:05:38.573 "product_name": "Malloc disk", 00:05:38.573 "block_size": 512, 00:05:38.573 "num_blocks": 16384, 00:05:38.573 "uuid": "618d7a4a-0d2f-47cf-b2f4-1ba2a17343f4", 00:05:38.573 "assigned_rate_limits": { 00:05:38.573 "rw_ios_per_sec": 0, 00:05:38.573 "rw_mbytes_per_sec": 0, 00:05:38.573 "r_mbytes_per_sec": 0, 00:05:38.573 "w_mbytes_per_sec": 0 00:05:38.573 }, 00:05:38.573 "claimed": false, 
00:05:38.573 "zoned": false, 00:05:38.573 "supported_io_types": { 00:05:38.573 "read": true, 00:05:38.573 "write": true, 00:05:38.573 "unmap": true, 00:05:38.573 "flush": true, 00:05:38.573 "reset": true, 00:05:38.573 "nvme_admin": false, 00:05:38.573 "nvme_io": false, 00:05:38.573 "nvme_io_md": false, 00:05:38.573 "write_zeroes": true, 00:05:38.573 "zcopy": true, 00:05:38.573 "get_zone_info": false, 00:05:38.573 "zone_management": false, 00:05:38.573 "zone_append": false, 00:05:38.573 "compare": false, 00:05:38.573 "compare_and_write": false, 00:05:38.573 "abort": true, 00:05:38.573 "seek_hole": false, 00:05:38.573 "seek_data": false, 00:05:38.573 "copy": true, 00:05:38.573 "nvme_iov_md": false 00:05:38.573 }, 00:05:38.573 "memory_domains": [ 00:05:38.573 { 00:05:38.573 "dma_device_id": "system", 00:05:38.573 "dma_device_type": 1 00:05:38.573 }, 00:05:38.573 { 00:05:38.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.573 "dma_device_type": 2 00:05:38.573 } 00:05:38.573 ], 00:05:38.573 "driver_specific": {} 00:05:38.573 } 00:05:38.573 ]' 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 [2024-11-15 10:47:27.384507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:38.573 [2024-11-15 10:47:27.384538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.573 [2024-11-15 10:47:27.384551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9a97b0 00:05:38.573 [2024-11-15 10:47:27.384558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.573 [2024-11-15 10:47:27.385668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.573 [2024-11-15 10:47:27.385690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.573 Passthru0 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.573 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.573 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.573 { 00:05:38.574 "name": "Malloc0", 00:05:38.574 "aliases": [ 00:05:38.574 "618d7a4a-0d2f-47cf-b2f4-1ba2a17343f4" 00:05:38.574 ], 00:05:38.574 "product_name": "Malloc disk", 00:05:38.574 "block_size": 512, 00:05:38.574 "num_blocks": 16384, 00:05:38.574 "uuid": "618d7a4a-0d2f-47cf-b2f4-1ba2a17343f4", 00:05:38.574 "assigned_rate_limits": { 00:05:38.574 "rw_ios_per_sec": 0, 00:05:38.574 "rw_mbytes_per_sec": 0, 00:05:38.574 "r_mbytes_per_sec": 0, 00:05:38.574 "w_mbytes_per_sec": 0 00:05:38.574 }, 00:05:38.574 "claimed": true, 00:05:38.574 "claim_type": "exclusive_write", 00:05:38.574 "zoned": false, 00:05:38.574 "supported_io_types": { 00:05:38.574 "read": true, 00:05:38.574 "write": true, 00:05:38.574 "unmap": true, 00:05:38.574 "flush": true, 00:05:38.574 "reset": true, 
00:05:38.574 "nvme_admin": false, 00:05:38.574 "nvme_io": false, 00:05:38.574 "nvme_io_md": false, 00:05:38.574 "write_zeroes": true, 00:05:38.574 "zcopy": true, 00:05:38.574 "get_zone_info": false, 00:05:38.574 "zone_management": false, 00:05:38.574 "zone_append": false, 00:05:38.574 "compare": false, 00:05:38.574 "compare_and_write": false, 00:05:38.574 "abort": true, 00:05:38.574 "seek_hole": false, 00:05:38.574 "seek_data": false, 00:05:38.574 "copy": true, 00:05:38.574 "nvme_iov_md": false 00:05:38.574 }, 00:05:38.574 "memory_domains": [ 00:05:38.574 { 00:05:38.574 "dma_device_id": "system", 00:05:38.574 "dma_device_type": 1 00:05:38.574 }, 00:05:38.574 { 00:05:38.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.574 "dma_device_type": 2 00:05:38.574 } 00:05:38.574 ], 00:05:38.574 "driver_specific": {} 00:05:38.574 }, 00:05:38.574 { 00:05:38.574 "name": "Passthru0", 00:05:38.574 "aliases": [ 00:05:38.574 "3b0bb656-01c7-548b-a808-f190e42e8934" 00:05:38.574 ], 00:05:38.574 "product_name": "passthru", 00:05:38.574 "block_size": 512, 00:05:38.574 "num_blocks": 16384, 00:05:38.574 "uuid": "3b0bb656-01c7-548b-a808-f190e42e8934", 00:05:38.574 "assigned_rate_limits": { 00:05:38.574 "rw_ios_per_sec": 0, 00:05:38.574 "rw_mbytes_per_sec": 0, 00:05:38.574 "r_mbytes_per_sec": 0, 00:05:38.574 "w_mbytes_per_sec": 0 00:05:38.574 }, 00:05:38.574 "claimed": false, 00:05:38.574 "zoned": false, 00:05:38.574 "supported_io_types": { 00:05:38.574 "read": true, 00:05:38.574 "write": true, 00:05:38.574 "unmap": true, 00:05:38.574 "flush": true, 00:05:38.574 "reset": true, 00:05:38.574 "nvme_admin": false, 00:05:38.574 "nvme_io": false, 00:05:38.574 "nvme_io_md": false, 00:05:38.574 "write_zeroes": true, 00:05:38.574 "zcopy": true, 00:05:38.574 "get_zone_info": false, 00:05:38.574 "zone_management": false, 00:05:38.574 "zone_append": false, 00:05:38.574 "compare": false, 00:05:38.574 "compare_and_write": false, 00:05:38.574 "abort": true, 00:05:38.574 "seek_hole": false, 00:05:38.574 "seek_data": false, 00:05:38.574 "copy": true, 00:05:38.574 "nvme_iov_md": false 00:05:38.574 }, 00:05:38.574 "memory_domains": [ 00:05:38.574 { 00:05:38.574 "dma_device_id": "system", 00:05:38.574 "dma_device_type": 1 00:05:38.574 }, 00:05:38.574 { 00:05:38.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.574 "dma_device_type": 2 00:05:38.574 } 00:05:38.574 ], 00:05:38.574 "driver_specific": { 00:05:38.574 "passthru": { 00:05:38.574 "name": "Passthru0", 00:05:38.574 "base_bdev_name": "Malloc0" 00:05:38.574 } 00:05:38.574 } 00:05:38.574 } 00:05:38.574 ]' 00:05:38.574 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.832 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.832 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.832 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.832 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.832 
10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.832 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.832 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.832 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.833 10:47:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.833 00:05:38.833 real 0m0.254s 00:05:38.833 user 0m0.151s 00:05:38.833 sys 0m0.037s 00:05:38.833 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.833 10:47:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.833 ************************************ 00:05:38.833 END TEST rpc_integrity 00:05:38.833 ************************************ 00:05:38.833 10:47:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:38.833 10:47:27 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.833 10:47:27 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.833 10:47:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.833 ************************************ 00:05:38.833 START TEST rpc_plugins 00:05:38.833 ************************************ 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:38.833 { 00:05:38.833 "name": "Malloc1", 00:05:38.833 "aliases": [ 00:05:38.833 "d38f0cc9-814b-4b8b-bbcf-28d17668f8ba" 00:05:38.833 ], 00:05:38.833 "product_name": "Malloc disk", 00:05:38.833 "block_size": 4096, 00:05:38.833 "num_blocks": 256, 00:05:38.833 "uuid": "d38f0cc9-814b-4b8b-bbcf-28d17668f8ba", 00:05:38.833 "assigned_rate_limits": { 00:05:38.833 "rw_ios_per_sec": 0, 00:05:38.833 "rw_mbytes_per_sec": 0, 00:05:38.833 "r_mbytes_per_sec": 0, 00:05:38.833 "w_mbytes_per_sec": 0 00:05:38.833 }, 00:05:38.833 "claimed": false, 00:05:38.833 "zoned": false, 00:05:38.833 "supported_io_types": { 00:05:38.833 "read": true, 00:05:38.833 "write": true, 00:05:38.833 "unmap": true, 00:05:38.833 "flush": true, 00:05:38.833 "reset": true, 00:05:38.833 "nvme_admin": false, 00:05:38.833 "nvme_io": false, 00:05:38.833 "nvme_io_md": false, 00:05:38.833 "write_zeroes": true, 00:05:38.833 "zcopy": true, 00:05:38.833 "get_zone_info": false, 00:05:38.833 "zone_management": false, 00:05:38.833 "zone_append": false, 00:05:38.833 "compare": false, 00:05:38.833 "compare_and_write": false, 00:05:38.833 "abort": true, 00:05:38.833 "seek_hole": false, 00:05:38.833 "seek_data": false, 00:05:38.833 "copy": true, 00:05:38.833 "nvme_iov_md": false 00:05:38.833 }, 00:05:38.833 
"memory_domains": [ 00:05:38.833 { 00:05:38.833 "dma_device_id": "system", 00:05:38.833 "dma_device_type": 1 00:05:38.833 }, 00:05:38.833 { 00:05:38.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.833 "dma_device_type": 2 00:05:38.833 } 00:05:38.833 ], 00:05:38.833 "driver_specific": {} 00:05:38.833 } 00:05:38.833 ]' 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.833 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.833 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:39.091 10:47:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:39.091 00:05:39.091 real 0m0.141s 00:05:39.091 user 0m0.086s 00:05:39.091 sys 0m0.016s 00:05:39.091 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.091 10:47:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.091 ************************************ 00:05:39.091 END TEST rpc_plugins 00:05:39.091 ************************************ 00:05:39.091 10:47:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:39.091 10:47:27 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.091 10:47:27 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.091 10:47:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.091 ************************************ 00:05:39.091 START TEST rpc_trace_cmd_test 00:05:39.091 ************************************ 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:39.091 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1268587", 00:05:39.091 "tpoint_group_mask": "0x8", 00:05:39.091 "iscsi_conn": { 00:05:39.091 "mask": "0x2", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "scsi": { 00:05:39.091 "mask": "0x4", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "bdev": { 00:05:39.091 "mask": "0x8", 00:05:39.091 "tpoint_mask": "0xffffffffffffffff" 00:05:39.091 }, 00:05:39.091 "nvmf_rdma": { 00:05:39.091 "mask": "0x10", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "nvmf_tcp": { 00:05:39.091 "mask": "0x20", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 
00:05:39.091 "ftl": { 00:05:39.091 "mask": "0x40", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "blobfs": { 00:05:39.091 "mask": "0x80", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "dsa": { 00:05:39.091 "mask": "0x200", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "thread": { 00:05:39.091 "mask": "0x400", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "nvme_pcie": { 00:05:39.091 "mask": "0x800", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "iaa": { 00:05:39.091 "mask": "0x1000", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "nvme_tcp": { 00:05:39.091 "mask": "0x2000", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "bdev_nvme": { 00:05:39.091 "mask": "0x4000", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "sock": { 00:05:39.091 "mask": "0x8000", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "blob": { 00:05:39.091 "mask": "0x10000", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "bdev_raid": { 00:05:39.091 "mask": "0x20000", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 }, 00:05:39.091 "scheduler": { 00:05:39.091 "mask": "0x40000", 00:05:39.091 "tpoint_mask": "0x0" 00:05:39.091 } 00:05:39.091 }' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:39.091 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:39.349 10:47:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:39.349 00:05:39.349 real 0m0.204s 00:05:39.349 user 0m0.171s 00:05:39.349 sys 0m0.023s 00:05:39.349 10:47:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.349 10:47:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.349 ************************************ 00:05:39.349 END TEST rpc_trace_cmd_test 00:05:39.349 ************************************ 00:05:39.349 10:47:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:39.349 10:47:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:39.349 10:47:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:39.349 10:47:28 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.349 10:47:28 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.349 10:47:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.349 ************************************ 00:05:39.349 START TEST rpc_daemon_integrity 00:05:39.349 ************************************ 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.349 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.349 { 00:05:39.349 "name": "Malloc2", 00:05:39.349 "aliases": [ 00:05:39.349 "28c00723-e095-4c75-aa3c-a1a6480dd497" 00:05:39.349 ], 00:05:39.349 "product_name": "Malloc disk", 00:05:39.349 "block_size": 512, 00:05:39.349 "num_blocks": 16384, 00:05:39.349 "uuid": "28c00723-e095-4c75-aa3c-a1a6480dd497", 00:05:39.349 "assigned_rate_limits": { 00:05:39.349 "rw_ios_per_sec": 0, 00:05:39.349 "rw_mbytes_per_sec": 0, 00:05:39.349 "r_mbytes_per_sec": 0, 00:05:39.349 "w_mbytes_per_sec": 0 00:05:39.349 }, 00:05:39.349 "claimed": false, 00:05:39.350 "zoned": false, 00:05:39.350 "supported_io_types": { 00:05:39.350 "read": true, 00:05:39.350 "write": true, 00:05:39.350 "unmap": true, 00:05:39.350 "flush": true, 00:05:39.350 "reset": true, 00:05:39.350 "nvme_admin": false, 00:05:39.350 "nvme_io": false, 00:05:39.350 "nvme_io_md": false, 00:05:39.350 "write_zeroes": true, 00:05:39.350 "zcopy": true, 00:05:39.350 "get_zone_info": false, 00:05:39.350 "zone_management": false, 00:05:39.350 "zone_append": false, 00:05:39.350 "compare": false, 00:05:39.350 "compare_and_write": false, 00:05:39.350 "abort": true, 00:05:39.350 "seek_hole": false, 00:05:39.350 "seek_data": false, 00:05:39.350 "copy": true, 00:05:39.350 "nvme_iov_md": false 00:05:39.350 }, 00:05:39.350 "memory_domains": [ 00:05:39.350 { 00:05:39.350 "dma_device_id": "system", 00:05:39.350 "dma_device_type": 1 00:05:39.350 }, 00:05:39.350 { 00:05:39.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.350 "dma_device_type": 2 00:05:39.350 } 00:05:39.350 ], 00:05:39.350 "driver_specific": {} 00:05:39.350 } 00:05:39.350 ]' 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 [2024-11-15 10:47:28.174664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:39.350 [2024-11-15 10:47:28.174694] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.350 [2024-11-15 10:47:28.174708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9ad250 00:05:39.350 [2024-11-15 10:47:28.174715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.350 [2024-11-15 10:47:28.175731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.350 [2024-11-15 10:47:28.175751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.350 Passthru0 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.350 { 00:05:39.350 "name": "Malloc2", 00:05:39.350 "aliases": [ 00:05:39.350 "28c00723-e095-4c75-aa3c-a1a6480dd497" 00:05:39.350 ], 00:05:39.350 "product_name": "Malloc disk", 00:05:39.350 "block_size": 512, 00:05:39.350 "num_blocks": 16384, 00:05:39.350 "uuid": "28c00723-e095-4c75-aa3c-a1a6480dd497", 00:05:39.350 "assigned_rate_limits": { 00:05:39.350 "rw_ios_per_sec": 0, 00:05:39.350 "rw_mbytes_per_sec": 0, 00:05:39.350 "r_mbytes_per_sec": 0, 00:05:39.350 "w_mbytes_per_sec": 0 00:05:39.350 }, 00:05:39.350 "claimed": true, 00:05:39.350 "claim_type": "exclusive_write", 00:05:39.350 "zoned": false, 00:05:39.350 "supported_io_types": { 00:05:39.350 "read": true, 00:05:39.350 "write": true, 00:05:39.350 "unmap": true, 00:05:39.350 "flush": true, 00:05:39.350 "reset": true, 00:05:39.350 "nvme_admin": false, 00:05:39.350 "nvme_io": false, 00:05:39.350 "nvme_io_md": false, 00:05:39.350 "write_zeroes": true, 00:05:39.350 "zcopy": true, 00:05:39.350 "get_zone_info": false, 00:05:39.350 "zone_management": false, 00:05:39.350 "zone_append": false, 00:05:39.350 "compare": false, 00:05:39.350 "compare_and_write": false, 00:05:39.350 "abort": true, 00:05:39.350 "seek_hole": false, 00:05:39.350 "seek_data": false, 00:05:39.350 "copy": true, 00:05:39.350 "nvme_iov_md": false 00:05:39.350 }, 00:05:39.350 "memory_domains": [ 00:05:39.350 { 00:05:39.350 "dma_device_id": "system", 00:05:39.350 "dma_device_type": 1 00:05:39.350 }, 00:05:39.350 { 00:05:39.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.350 "dma_device_type": 2 00:05:39.350 } 00:05:39.350 ], 00:05:39.350 "driver_specific": {} 00:05:39.350 }, 00:05:39.350 { 00:05:39.350 "name": "Passthru0", 00:05:39.350 "aliases": [ 00:05:39.350 "40e6225e-6b8e-5c66-b4a7-a794ec2fb965" 00:05:39.350 ], 00:05:39.350 "product_name": "passthru", 00:05:39.350 "block_size": 512, 00:05:39.350 "num_blocks": 16384, 00:05:39.350 "uuid": "40e6225e-6b8e-5c66-b4a7-a794ec2fb965", 00:05:39.350 "assigned_rate_limits": { 00:05:39.350 "rw_ios_per_sec": 0, 00:05:39.350 "rw_mbytes_per_sec": 0, 00:05:39.350 "r_mbytes_per_sec": 0, 00:05:39.350 "w_mbytes_per_sec": 0 00:05:39.350 }, 00:05:39.350 "claimed": false, 00:05:39.350 "zoned": false, 00:05:39.350 "supported_io_types": { 00:05:39.350 "read": true, 00:05:39.350 "write": true, 00:05:39.350 "unmap": true, 00:05:39.350 "flush": true, 00:05:39.350 "reset": true, 00:05:39.350 "nvme_admin": false, 
00:05:39.350 "nvme_io": false, 00:05:39.350 "nvme_io_md": false, 00:05:39.350 "write_zeroes": true, 00:05:39.350 "zcopy": true, 00:05:39.350 "get_zone_info": false, 00:05:39.350 "zone_management": false, 00:05:39.350 "zone_append": false, 00:05:39.350 "compare": false, 00:05:39.350 "compare_and_write": false, 00:05:39.350 "abort": true, 00:05:39.350 "seek_hole": false, 00:05:39.350 "seek_data": false, 00:05:39.350 "copy": true, 00:05:39.350 "nvme_iov_md": false 00:05:39.350 }, 00:05:39.350 "memory_domains": [ 00:05:39.350 { 00:05:39.350 "dma_device_id": "system", 00:05:39.350 "dma_device_type": 1 00:05:39.350 }, 00:05:39.350 { 00:05:39.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.350 "dma_device_type": 2 00:05:39.350 } 00:05:39.350 ], 00:05:39.350 "driver_specific": { 00:05:39.350 "passthru": { 00:05:39.350 "name": "Passthru0", 00:05:39.350 "base_bdev_name": "Malloc2" 00:05:39.350 } 00:05:39.350 } 00:05:39.350 } 00:05:39.350 ]' 00:05:39.350 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.610 00:05:39.610 real 0m0.259s 00:05:39.610 user 0m0.168s 00:05:39.610 sys 0m0.031s 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.610 10:47:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.610 ************************************ 00:05:39.610 END TEST rpc_daemon_integrity 00:05:39.610 ************************************ 00:05:39.610 10:47:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:39.610 10:47:28 rpc -- rpc/rpc.sh@84 -- # killprocess 1268587 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@952 -- # '[' -z 1268587 ']' 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@956 -- # kill -0 1268587 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@957 -- # uname 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1268587 00:05:39.610 10:47:28 rpc -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1268587' 00:05:39.610 killing process with pid 1268587 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@971 -- # kill 1268587 00:05:39.610 10:47:28 rpc -- common/autotest_common.sh@976 -- # wait 1268587 00:05:39.869 00:05:39.869 real 0m1.995s 00:05:39.869 user 0m2.529s 00:05:39.869 sys 0m0.665s 00:05:39.869 10:47:28 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.869 10:47:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.869 ************************************ 00:05:39.869 END TEST rpc 00:05:39.869 ************************************ 00:05:39.869 10:47:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.869 10:47:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.869 10:47:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.869 10:47:28 -- common/autotest_common.sh@10 -- # set +x 00:05:40.128 ************************************ 00:05:40.128 START TEST skip_rpc 00:05:40.128 ************************************ 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:40.128 * Looking for test storage... 00:05:40.128 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.128 10:47:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.128 --rc genhtml_branch_coverage=1 00:05:40.128 --rc genhtml_function_coverage=1 00:05:40.128 --rc genhtml_legend=1 00:05:40.128 --rc geninfo_all_blocks=1 00:05:40.128 --rc geninfo_unexecuted_blocks=1 00:05:40.128 00:05:40.128 ' 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.128 --rc genhtml_branch_coverage=1 00:05:40.128 --rc genhtml_function_coverage=1 00:05:40.128 --rc genhtml_legend=1 00:05:40.128 --rc geninfo_all_blocks=1 00:05:40.128 --rc geninfo_unexecuted_blocks=1 00:05:40.128 00:05:40.128 ' 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.128 --rc genhtml_branch_coverage=1 00:05:40.128 --rc genhtml_function_coverage=1 00:05:40.128 --rc genhtml_legend=1 00:05:40.128 --rc geninfo_all_blocks=1 00:05:40.128 --rc geninfo_unexecuted_blocks=1 00:05:40.128 00:05:40.128 ' 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.128 --rc genhtml_branch_coverage=1 00:05:40.128 --rc genhtml_function_coverage=1 00:05:40.128 --rc genhtml_legend=1 00:05:40.128 --rc geninfo_all_blocks=1 00:05:40.128 --rc geninfo_unexecuted_blocks=1 00:05:40.128 00:05:40.128 ' 00:05:40.128 10:47:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:40.128 10:47:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:40.128 10:47:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.128 10:47:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.128 ************************************ 00:05:40.128 START TEST skip_rpc 00:05:40.128 ************************************ 00:05:40.128 10:47:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:40.128 10:47:28 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1269159 00:05:40.128 10:47:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.128 10:47:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.128 10:47:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:40.388 [2024-11-15 10:47:29.027877] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:05:40.388 [2024-11-15 10:47:29.027916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269159 ] 00:05:40.388 [2024-11-15 10:47:29.088610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.388 [2024-11-15 10:47:29.128572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1269159 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 1269159 ']' 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 1269159 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.655 10:47:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1269159 00:05:45.655 10:47:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:45.655 10:47:34 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:45.655 10:47:34 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1269159' 00:05:45.655 killing process with pid 1269159 00:05:45.655 10:47:34 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 1269159 00:05:45.655 10:47:34 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 1269159 00:05:45.655 00:05:45.655 real 0m5.364s 00:05:45.655 user 0m5.144s 00:05:45.655 sys 0m0.254s 00:05:45.655 10:47:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.655 10:47:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 ************************************ 00:05:45.655 END TEST skip_rpc 00:05:45.655 ************************************ 00:05:45.655 10:47:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:45.655 10:47:34 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.655 10:47:34 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.655 10:47:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 ************************************ 00:05:45.655 START TEST skip_rpc_with_json 00:05:45.655 ************************************ 00:05:45.655 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1270053 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1270053 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 1270053 ']' 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.656 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.656 [2024-11-15 10:47:34.458483] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
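Once this second target (pid 1270053) is up, skip_rpc_with_json drives it over RPC and finishes by dumping its whole configuration: the save_config call and the config.json cat'ed below. A hypothetical round-trip of that flow -- paths mirror this log, and the --json flag is assumed here to be spdk_tgt's JSON-config load option:

# Capture the live target's config over RPC, then boot a fresh target from it.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/spdk_config.json
./build/bin/spdk_tgt -m 0x1 --json /tmp/spdk_config.json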
00:05:45.656 [2024-11-15 10:47:34.458525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270053 ] 00:05:45.656 [2024-11-15 10:47:34.522886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.914 [2024-11-15 10:47:34.566212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.914 [2024-11-15 10:47:34.783721] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:45.914 request: 00:05:45.914 { 00:05:45.914 "trtype": "tcp", 00:05:45.914 "method": "nvmf_get_transports", 00:05:45.914 "req_id": 1 00:05:45.914 } 00:05:45.914 Got JSON-RPC error response 00:05:45.914 response: 00:05:45.914 { 00:05:45.914 "code": -19, 00:05:45.914 "message": "No such device" 00:05:45.914 } 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.914 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.914 [2024-11-15 10:47:34.795831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.915 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.915 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:45.915 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.915 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.173 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.174 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:46.174 { 00:05:46.174 "subsystems": [ 00:05:46.174 { 00:05:46.174 "subsystem": "fsdev", 00:05:46.174 "config": [ 00:05:46.174 { 00:05:46.174 "method": "fsdev_set_opts", 00:05:46.174 "params": { 00:05:46.174 "fsdev_io_pool_size": 65535, 00:05:46.174 "fsdev_io_cache_size": 256 00:05:46.174 } 00:05:46.174 } 00:05:46.174 ] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "keyring", 00:05:46.174 "config": [] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "iobuf", 00:05:46.174 "config": [ 00:05:46.174 { 00:05:46.174 "method": "iobuf_set_options", 00:05:46.174 "params": { 00:05:46.174 "small_pool_count": 8192, 00:05:46.174 "large_pool_count": 1024, 00:05:46.174 "small_bufsize": 8192, 00:05:46.174 "large_bufsize": 135168, 00:05:46.174 "enable_numa": false 00:05:46.174 } 00:05:46.174 } 00:05:46.174 ] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "sock", 00:05:46.174 "config": [ 00:05:46.174 { 
00:05:46.174 "method": "sock_set_default_impl", 00:05:46.174 "params": { 00:05:46.174 "impl_name": "posix" 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "sock_impl_set_options", 00:05:46.174 "params": { 00:05:46.174 "impl_name": "ssl", 00:05:46.174 "recv_buf_size": 4096, 00:05:46.174 "send_buf_size": 4096, 00:05:46.174 "enable_recv_pipe": true, 00:05:46.174 "enable_quickack": false, 00:05:46.174 "enable_placement_id": 0, 00:05:46.174 "enable_zerocopy_send_server": true, 00:05:46.174 "enable_zerocopy_send_client": false, 00:05:46.174 "zerocopy_threshold": 0, 00:05:46.174 "tls_version": 0, 00:05:46.174 "enable_ktls": false 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "sock_impl_set_options", 00:05:46.174 "params": { 00:05:46.174 "impl_name": "posix", 00:05:46.174 "recv_buf_size": 2097152, 00:05:46.174 "send_buf_size": 2097152, 00:05:46.174 "enable_recv_pipe": true, 00:05:46.174 "enable_quickack": false, 00:05:46.174 "enable_placement_id": 0, 00:05:46.174 "enable_zerocopy_send_server": true, 00:05:46.174 "enable_zerocopy_send_client": false, 00:05:46.174 "zerocopy_threshold": 0, 00:05:46.174 "tls_version": 0, 00:05:46.174 "enable_ktls": false 00:05:46.174 } 00:05:46.174 } 00:05:46.174 ] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "vmd", 00:05:46.174 "config": [] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "accel", 00:05:46.174 "config": [ 00:05:46.174 { 00:05:46.174 "method": "accel_set_options", 00:05:46.174 "params": { 00:05:46.174 "small_cache_size": 128, 00:05:46.174 "large_cache_size": 16, 00:05:46.174 "task_count": 2048, 00:05:46.174 "sequence_count": 2048, 00:05:46.174 "buf_count": 2048 00:05:46.174 } 00:05:46.174 } 00:05:46.174 ] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "bdev", 00:05:46.174 "config": [ 00:05:46.174 { 00:05:46.174 "method": "bdev_set_options", 00:05:46.174 "params": { 00:05:46.174 "bdev_io_pool_size": 65535, 00:05:46.174 "bdev_io_cache_size": 256, 00:05:46.174 "bdev_auto_examine": true, 00:05:46.174 "iobuf_small_cache_size": 128, 00:05:46.174 "iobuf_large_cache_size": 16 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "bdev_raid_set_options", 00:05:46.174 "params": { 00:05:46.174 "process_window_size_kb": 1024, 00:05:46.174 "process_max_bandwidth_mb_sec": 0 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "bdev_iscsi_set_options", 00:05:46.174 "params": { 00:05:46.174 "timeout_sec": 30 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "bdev_nvme_set_options", 00:05:46.174 "params": { 00:05:46.174 "action_on_timeout": "none", 00:05:46.174 "timeout_us": 0, 00:05:46.174 "timeout_admin_us": 0, 00:05:46.174 "keep_alive_timeout_ms": 10000, 00:05:46.174 "arbitration_burst": 0, 00:05:46.174 "low_priority_weight": 0, 00:05:46.174 "medium_priority_weight": 0, 00:05:46.174 "high_priority_weight": 0, 00:05:46.174 "nvme_adminq_poll_period_us": 10000, 00:05:46.174 "nvme_ioq_poll_period_us": 0, 00:05:46.174 "io_queue_requests": 0, 00:05:46.174 "delay_cmd_submit": true, 00:05:46.174 "transport_retry_count": 4, 00:05:46.174 "bdev_retry_count": 3, 00:05:46.174 "transport_ack_timeout": 0, 00:05:46.174 "ctrlr_loss_timeout_sec": 0, 00:05:46.174 "reconnect_delay_sec": 0, 00:05:46.174 "fast_io_fail_timeout_sec": 0, 00:05:46.174 "disable_auto_failback": false, 00:05:46.174 "generate_uuids": false, 00:05:46.174 "transport_tos": 0, 00:05:46.174 "nvme_error_stat": false, 00:05:46.174 "rdma_srq_size": 0, 00:05:46.174 "io_path_stat": false, 
00:05:46.174 "allow_accel_sequence": false, 00:05:46.174 "rdma_max_cq_size": 0, 00:05:46.174 "rdma_cm_event_timeout_ms": 0, 00:05:46.174 "dhchap_digests": [ 00:05:46.174 "sha256", 00:05:46.174 "sha384", 00:05:46.174 "sha512" 00:05:46.174 ], 00:05:46.174 "dhchap_dhgroups": [ 00:05:46.174 "null", 00:05:46.174 "ffdhe2048", 00:05:46.174 "ffdhe3072", 00:05:46.174 "ffdhe4096", 00:05:46.174 "ffdhe6144", 00:05:46.174 "ffdhe8192" 00:05:46.174 ] 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "bdev_nvme_set_hotplug", 00:05:46.174 "params": { 00:05:46.174 "period_us": 100000, 00:05:46.174 "enable": false 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "bdev_wait_for_examine" 00:05:46.174 } 00:05:46.174 ] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "scsi", 00:05:46.174 "config": null 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "scheduler", 00:05:46.174 "config": [ 00:05:46.174 { 00:05:46.174 "method": "framework_set_scheduler", 00:05:46.174 "params": { 00:05:46.174 "name": "static" 00:05:46.174 } 00:05:46.174 } 00:05:46.174 ] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "vhost_scsi", 00:05:46.174 "config": [] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "vhost_blk", 00:05:46.174 "config": [] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "ublk", 00:05:46.174 "config": [] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "nbd", 00:05:46.174 "config": [] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "nvmf", 00:05:46.174 "config": [ 00:05:46.174 { 00:05:46.174 "method": "nvmf_set_config", 00:05:46.174 "params": { 00:05:46.174 "discovery_filter": "match_any", 00:05:46.174 "admin_cmd_passthru": { 00:05:46.174 "identify_ctrlr": false 00:05:46.174 }, 00:05:46.174 "dhchap_digests": [ 00:05:46.174 "sha256", 00:05:46.174 "sha384", 00:05:46.174 "sha512" 00:05:46.174 ], 00:05:46.174 "dhchap_dhgroups": [ 00:05:46.174 "null", 00:05:46.174 "ffdhe2048", 00:05:46.174 "ffdhe3072", 00:05:46.174 "ffdhe4096", 00:05:46.174 "ffdhe6144", 00:05:46.174 "ffdhe8192" 00:05:46.174 ] 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "nvmf_set_max_subsystems", 00:05:46.174 "params": { 00:05:46.174 "max_subsystems": 1024 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "nvmf_set_crdt", 00:05:46.174 "params": { 00:05:46.174 "crdt1": 0, 00:05:46.174 "crdt2": 0, 00:05:46.174 "crdt3": 0 00:05:46.174 } 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "method": "nvmf_create_transport", 00:05:46.174 "params": { 00:05:46.174 "trtype": "TCP", 00:05:46.174 "max_queue_depth": 128, 00:05:46.174 "max_io_qpairs_per_ctrlr": 127, 00:05:46.174 "in_capsule_data_size": 4096, 00:05:46.174 "max_io_size": 131072, 00:05:46.174 "io_unit_size": 131072, 00:05:46.174 "max_aq_depth": 128, 00:05:46.174 "num_shared_buffers": 511, 00:05:46.174 "buf_cache_size": 4294967295, 00:05:46.174 "dif_insert_or_strip": false, 00:05:46.174 "zcopy": false, 00:05:46.174 "c2h_success": true, 00:05:46.174 "sock_priority": 0, 00:05:46.174 "abort_timeout_sec": 1, 00:05:46.174 "ack_timeout": 0, 00:05:46.174 "data_wr_pool_size": 0 00:05:46.174 } 00:05:46.174 } 00:05:46.174 ] 00:05:46.174 }, 00:05:46.174 { 00:05:46.174 "subsystem": "iscsi", 00:05:46.174 "config": [ 00:05:46.174 { 00:05:46.174 "method": "iscsi_set_options", 00:05:46.174 "params": { 00:05:46.174 "node_base": "iqn.2016-06.io.spdk", 00:05:46.174 "max_sessions": 128, 00:05:46.174 "max_connections_per_session": 2, 00:05:46.174 "max_queue_depth": 64, 00:05:46.174 
"default_time2wait": 2, 00:05:46.175 "default_time2retain": 20, 00:05:46.175 "first_burst_length": 8192, 00:05:46.175 "immediate_data": true, 00:05:46.175 "allow_duplicated_isid": false, 00:05:46.175 "error_recovery_level": 0, 00:05:46.175 "nop_timeout": 60, 00:05:46.175 "nop_in_interval": 30, 00:05:46.175 "disable_chap": false, 00:05:46.175 "require_chap": false, 00:05:46.175 "mutual_chap": false, 00:05:46.175 "chap_group": 0, 00:05:46.175 "max_large_datain_per_connection": 64, 00:05:46.175 "max_r2t_per_connection": 4, 00:05:46.175 "pdu_pool_size": 36864, 00:05:46.175 "immediate_data_pool_size": 16384, 00:05:46.175 "data_out_pool_size": 2048 00:05:46.175 } 00:05:46.175 } 00:05:46.175 ] 00:05:46.175 } 00:05:46.175 ] 00:05:46.175 } 00:05:46.175 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:46.175 10:47:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1270053 00:05:46.175 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1270053 ']' 00:05:46.175 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1270053 00:05:46.175 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:46.175 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.175 10:47:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1270053 00:05:46.175 10:47:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.175 10:47:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.175 10:47:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1270053' 00:05:46.175 killing process with pid 1270053 00:05:46.175 10:47:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1270053 00:05:46.175 10:47:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1270053 00:05:46.433 10:47:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1270204 00:05:46.433 10:47:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:46.433 10:47:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1270204 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1270204 ']' 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1270204 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1270204 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1270204' 00:05:51.804 killing process with pid 1270204 00:05:51.804 10:47:40 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1270204 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1270204 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:51.804 00:05:51.804 real 0m6.277s 00:05:51.804 user 0m5.993s 00:05:51.804 sys 0m0.565s 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.804 10:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.804 ************************************ 00:05:51.804 END TEST skip_rpc_with_json 00:05:51.804 ************************************ 00:05:52.063 10:47:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:52.063 10:47:40 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.063 10:47:40 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.063 10:47:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.063 ************************************ 00:05:52.063 START TEST skip_rpc_with_delay 00:05:52.063 ************************************ 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.063 [2024-11-15 10:47:40.809562] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
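Before the delay test's exit-status bookkeeping below, it is worth unpacking the skip_rpc_with_json sequence that just finished: the TCP transport was created over RPC, the whole configuration was dumped with save_config into test/rpc/config.json, the first target (pid 1270053) was killed, and a second target (pid 1270204) was booted from that JSON with --no-rpc-server, so the 'TCP Transport Init' notice had to appear without any RPC traffic. A minimal sketch of the same round trip, assuming a local SPDK checkout and illustrative /tmp paths:

    # dump the live target's configuration over its RPC socket
    ./scripts/rpc.py save_config > /tmp/config.json          # /tmp paths are illustrative
    # restart the target from the saved JSON with the RPC server disabled
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json &> /tmp/log.txt &
    sleep 5
    # the transport must have come up purely from the JSON
    grep -q 'TCP Transport Init' /tmp/log.txt && echo 'config round trip OK'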
00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.063 00:05:52.063 real 0m0.069s 00:05:52.063 user 0m0.045s 00:05:52.063 sys 0m0.023s 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.063 10:47:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:52.063 ************************************ 00:05:52.063 END TEST skip_rpc_with_delay 00:05:52.063 ************************************ 00:05:52.063 10:47:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:52.063 10:47:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:52.063 10:47:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:52.063 10:47:40 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.063 10:47:40 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.063 10:47:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.063 ************************************ 00:05:52.063 START TEST exit_on_failed_rpc_init 00:05:52.063 ************************************ 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1271199 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1271199 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 1271199 ']' 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.063 10:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.063 [2024-11-15 10:47:40.947753] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
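The es juggling above is the tail of the NOT wrapper: the delay test passes only because spdk_tgt refused to start, app.c rejecting --wait-for-rpc once --no-rpc-server has disabled the RPC server. A stripped-down sketch of the inversion, assuming only the general shape of the real helper in autotest_common.sh (which additionally validates the executable via valid_exec_arg and, as the checks on es show, normalizes signal-death statuses above 128):

    NOT() {
        local es=0
        "$@" || es=$?            # run the wrapped command, remember its status
        (( es > 128 )) && es=1   # assumption: the real helper maps signal statuses through a case table
        (( !es == 0 ))           # succeed only when the wrapped command failed
    }
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # expected to exit 0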
00:05:52.063 [2024-11-15 10:47:40.947800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271199 ] 00:05:52.322 [2024-11-15 10:47:41.010552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.322 [2024-11-15 10:47:41.050328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.581 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.581 [2024-11-15 10:47:41.320045] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:05:52.581 [2024-11-15 10:47:41.320094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271253 ] 00:05:52.581 [2024-11-15 10:47:41.382838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.581 [2024-11-15 10:47:41.424202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.581 [2024-11-15 10:47:41.424263] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
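exit_on_failed_rpc_init provokes exactly the failure logged next: a second spdk_tgt (pid 1271253, core mask 0x2) is pointed at the default /var/tmp/spdk.sock, which the first target (pid 1271199) already owns, so rpc.c cannot listen and the app stops non-zero. A hedged sketch of the collision and the usual escape, assuming default socket paths:

    ./build/bin/spdk_tgt -m 0x1 &     # first instance owns /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2       # second instance fails: socket path in use
    # a second instance needs its own RPC socket, selected with -r/--rpc-socket
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &   # spdk2.sock is an illustrative name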
00:05:52.581 [2024-11-15 10:47:41.424273] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:52.581 [2024-11-15 10:47:41.424280] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1271199 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 1271199 ']' 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 1271199 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1271199 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1271199' 00:05:52.840 killing process with pid 1271199 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 1271199 00:05:52.840 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 1271199 00:05:53.099 00:05:53.099 real 0m0.931s 00:05:53.099 user 0m0.995s 00:05:53.099 sys 0m0.361s 00:05:53.099 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.099 10:47:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.099 ************************************ 00:05:53.099 END TEST exit_on_failed_rpc_init 00:05:53.099 ************************************ 00:05:53.099 10:47:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:53.099 00:05:53.099 real 0m13.092s 00:05:53.099 user 0m12.378s 00:05:53.099 sys 0m1.480s 00:05:53.099 10:47:41 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.099 10:47:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.099 ************************************ 00:05:53.099 END TEST skip_rpc 00:05:53.099 ************************************ 00:05:53.099 10:47:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.099 10:47:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.099 10:47:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.099 10:47:41 -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.099 ************************************ 00:05:53.099 START TEST rpc_client 00:05:53.099 ************************************ 00:05:53.099 10:47:41 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.359 * Looking for test storage... 00:05:53.359 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:53.359 10:47:42 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.359 10:47:42 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.359 10:47:42 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.359 10:47:42 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.359 10:47:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:53.360 10:47:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.360 10:47:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.360 10:47:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.360 10:47:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:53.360 10:47:42 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.360 10:47:42 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.360 --rc genhtml_branch_coverage=1 00:05:53.360 --rc genhtml_function_coverage=1 00:05:53.360 --rc genhtml_legend=1 00:05:53.360 --rc geninfo_all_blocks=1 00:05:53.360 --rc geninfo_unexecuted_blocks=1 00:05:53.360 00:05:53.360 ' 00:05:53.360 10:47:42 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.360 --rc genhtml_branch_coverage=1 00:05:53.360 --rc genhtml_function_coverage=1 00:05:53.360 --rc genhtml_legend=1 00:05:53.360 --rc geninfo_all_blocks=1 00:05:53.360 --rc geninfo_unexecuted_blocks=1 00:05:53.360 00:05:53.360 ' 00:05:53.360 10:47:42 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.360 --rc genhtml_branch_coverage=1 00:05:53.360 --rc genhtml_function_coverage=1 00:05:53.360 --rc genhtml_legend=1 00:05:53.360 --rc geninfo_all_blocks=1 00:05:53.360 --rc geninfo_unexecuted_blocks=1 00:05:53.360 00:05:53.360 ' 00:05:53.360 10:47:42 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.360 --rc genhtml_branch_coverage=1 00:05:53.360 --rc genhtml_function_coverage=1 00:05:53.360 --rc genhtml_legend=1 00:05:53.360 --rc geninfo_all_blocks=1 00:05:53.360 --rc geninfo_unexecuted_blocks=1 00:05:53.360 00:05:53.360 ' 00:05:53.360 10:47:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:53.360 OK 00:05:53.360 10:47:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:53.360 00:05:53.360 real 0m0.192s 00:05:53.360 user 0m0.112s 00:05:53.360 sys 0m0.094s 00:05:53.360 10:47:42 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.360 10:47:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:53.360 ************************************ 00:05:53.360 END TEST rpc_client 00:05:53.360 ************************************ 00:05:53.360 10:47:42 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.360 
10:47:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.360 10:47:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.360 10:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:53.360 ************************************ 00:05:53.360 START TEST json_config 00:05:53.360 ************************************ 00:05:53.360 10:47:42 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.620 10:47:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.620 10:47:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.620 10:47:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.620 10:47:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.620 10:47:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.620 10:47:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.620 10:47:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.620 10:47:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:53.620 10:47:42 json_config -- scripts/common.sh@345 -- # : 1 00:05:53.620 10:47:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.620 10:47:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.620 10:47:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:53.620 10:47:42 json_config -- scripts/common.sh@353 -- # local d=1 00:05:53.620 10:47:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.620 10:47:42 json_config -- scripts/common.sh@355 -- # echo 1 00:05:53.620 10:47:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.620 10:47:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@353 -- # local d=2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.620 10:47:42 json_config -- scripts/common.sh@355 -- # echo 2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.620 10:47:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.620 10:47:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.620 10:47:42 json_config -- scripts/common.sh@368 -- # return 0 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.620 --rc genhtml_branch_coverage=1 00:05:53.620 --rc genhtml_function_coverage=1 00:05:53.620 --rc genhtml_legend=1 00:05:53.620 --rc geninfo_all_blocks=1 00:05:53.620 --rc geninfo_unexecuted_blocks=1 00:05:53.620 00:05:53.620 ' 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.620 --rc genhtml_branch_coverage=1 00:05:53.620 --rc genhtml_function_coverage=1 00:05:53.620 --rc genhtml_legend=1 00:05:53.620 --rc geninfo_all_blocks=1 00:05:53.620 --rc geninfo_unexecuted_blocks=1 00:05:53.620 00:05:53.620 ' 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.620 --rc genhtml_branch_coverage=1 00:05:53.620 --rc genhtml_function_coverage=1 00:05:53.620 --rc genhtml_legend=1 00:05:53.620 --rc geninfo_all_blocks=1 00:05:53.620 --rc geninfo_unexecuted_blocks=1 00:05:53.620 00:05:53.620 ' 00:05:53.620 10:47:42 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.620 --rc genhtml_branch_coverage=1 00:05:53.620 --rc genhtml_function_coverage=1 00:05:53.620 --rc genhtml_legend=1 00:05:53.620 --rc geninfo_all_blocks=1 00:05:53.620 --rc geninfo_unexecuted_blocks=1 00:05:53.620 00:05:53.620 ' 00:05:53.620 10:47:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
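Interleaved above is the lcov version gate from scripts/common.sh: lt 1.15 2 splits both version strings on the IFS=.-: separators and walks the fields numerically, and because 1.15 is less than 2 the pre-2.x --rc lcov_* options get exported; the nvmf/common.sh environment setup then resumes below. A condensed sketch of the comparison, reduced to the '<' case of the real cmp_versions:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                  # condensed: the real helper also handles >, =, etc.
        local IFS=.-:
        local -a ver1=($1) ver2=($3)
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                      # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo 'lcov is pre-2.x'   # matches the trace above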
00:05:53.620 10:47:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.620 10:47:42 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:53.620 10:47:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.620 10:47:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.620 10:47:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.620 10:47:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.621 10:47:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.621 10:47:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.621 10:47:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.621 10:47:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:53.621 10:47:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@51 -- # : 0 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.621 
10:47:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:53.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:53.621 10:47:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:53.621 INFO: JSON configuration test init 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.621 10:47:42 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:53.621 10:47:42 json_config -- json_config/common.sh@9 -- # 
local app=target 00:05:53.621 10:47:42 json_config -- json_config/common.sh@10 -- # shift 00:05:53.621 10:47:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.621 10:47:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.621 10:47:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.621 10:47:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.621 10:47:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.621 10:47:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1271567 00:05:53.621 10:47:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.621 Waiting for target to run... 00:05:53.621 10:47:42 json_config -- json_config/common.sh@25 -- # waitforlisten 1271567 /var/tmp/spdk_tgt.sock 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@833 -- # '[' -z 1271567 ']' 00:05:53.621 10:47:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.621 10:47:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.621 [2024-11-15 10:47:42.433271] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
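json_config_test_start_app launches the target with --wait-for-rpc, so the app boots but holds subsystem initialization until an RPC releases it; the test then replays the generated config through load_config on /var/tmp/spdk_tgt.sock, as the tgt_rpc calls below show. The same gate can be exercised by hand; framework_start_init is the RPC that ends the wait, and the surrounding sequence is illustrative:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # pre-init tuning RPCs (sock_impl_set_options, accel_set_options, ...) would go here
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_get_subsystems   # target is now live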
00:05:53.621 [2024-11-15 10:47:42.433320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271567 ] 00:05:54.188 [2024-11-15 10:47:42.883732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.188 [2024-11-15 10:47:42.938906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.447 10:47:43 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.447 10:47:43 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:54.447 10:47:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.447 00:05:54.447 10:47:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:54.447 10:47:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:54.447 10:47:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.447 10:47:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.447 10:47:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:54.447 10:47:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:54.447 10:47:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.447 10:47:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.447 10:47:43 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:54.447 10:47:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:54.447 10:47:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:57.732 10:47:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.732 10:47:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:57.732 10:47:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@54 -- # sort 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:57.732 10:47:46 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:57.732 10:47:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.732 10:47:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.989 10:47:46 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:57.989 10:47:46 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:57.989 10:47:46 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:57.989 10:47:46 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:57.989 10:47:46 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:57.990 10:47:46 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:57.990 10:47:46 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:57.990 10:47:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.990 10:47:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.990 10:47:46 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:57.990 10:47:46 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:57.990 10:47:46 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:57.990 10:47:46 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.990 10:47:46 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:57.990 10:47:46 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:57.990 10:47:46 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.990 10:47:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:03.258 
10:47:52 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@320 -- # e810=() 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@321 -- # x722=() 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@322 -- # mlx=() 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:06:03.258 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:06:03.258 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:03.258 10:47:52 json_config -- 
nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:06:03.258 Found net devices under 0000:af:00.0: mlx_0_0 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:06:03.258 Found net devices under 0000:af:00.1: mlx_0_1 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@62 -- # uname 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:03.258 10:47:52 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:03.517 10:47:52 json_config -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:03.518 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:03.518 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:06:03.518 altname enp175s0f0np0 00:06:03.518 altname ens801f0np0 00:06:03.518 inet 192.168.100.8/24 scope global mlx_0_0 00:06:03.518 valid_lft forever preferred_lft forever 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:03.518 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:03.518 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:06:03.518 altname enp175s0f1np1 00:06:03.518 altname ens801f1np1 00:06:03.518 inet 192.168.100.9/24 scope global mlx_0_1 00:06:03.518 valid_lft forever preferred_lft forever 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@450 -- # return 0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@482 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:03.518 192.168.100.9' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:03.518 192.168.100.9' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@485 -- # head -n 1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:03.518 192.168.100.9' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@486 -- # head -n 1 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 
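Note: the address harvesting traced above reduces to a single pipeline per interface. A minimal sketch of the get_ip_address step, assuming only that the interface (here mlx_0_0) exists and carries an IPv4 address:

  # hypothetical helper mirroring get_ip_address(): print the first IPv4
  # address bound to an interface, with the /prefix suffix stripped
  get_ipv4() {
      local ifc=$1
      # `ip -o -4 addr show` emits one record per line; field 4 is ADDR/PREFIX
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  }
  get_ipv4 mlx_0_0    # prints 192.168.100.8 on this rig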
00:06:03.518 10:47:52 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:03.518 10:47:52 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:03.518 10:47:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:06:03.518 10:47:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.518 10:47:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.777 MallocForNvmf0 00:06:03.777 10:47:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.777 10:47:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:04.035 MallocForNvmf1 00:06:04.035 10:47:52 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:04.035 10:47:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:04.035 [2024-11-15 10:47:52.883751] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:04.035 [2024-11-15 10:47:52.911564] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21472c0/0x201b7c0) succeed. 00:06:04.293 [2024-11-15 10:47:52.923694] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21462b0/0x209b800) succeed. 
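Note: the two bdev_malloc_create calls allocate an 8 MiB ramdisk with 512-byte blocks and a 4 MiB ramdisk with 1024-byte blocks, and nvmf_create_transport registers the RDMA transport; the warning above shows the requested in-capsule size of 0 being floored to 256 bytes. A condensed sketch of the same three RPCs (reading -u as io-unit-size and -c as in-capsule-data-size is my interpretation, not something the trace states):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  # bdev_malloc_create <total_size_MiB> <block_size_bytes>
  $RPC -s $SOCK bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC -s $SOCK bdev_malloc_create 4 1024 --name MallocForNvmf1
  # 0 in-capsule bytes requested; the transport raises it to its 256 B minimum
  $RPC -s $SOCK nvmf_create_transport -t rdma -u 8192 -c 0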
00:06:04.293 10:47:52 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.293 10:47:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.293 10:47:53 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.293 10:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.552 10:47:53 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.552 10:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.810 10:47:53 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:04.810 10:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:05.068 [2024-11-15 10:47:53.704126] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:05.068 10:47:53 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:05.068 10:47:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.068 10:47:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.068 10:47:53 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:05.068 10:47:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.068 10:47:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.068 10:47:53 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:05.068 10:47:53 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.068 10:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.327 MallocBdevForConfigChangeCheck 00:06:05.327 10:47:53 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:05.327 10:47:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.327 10:47:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.327 10:47:54 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:05.327 10:47:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.585 10:47:54 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:05.585 INFO: shutting down applications... 
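Note: four more RPCs complete the export: create the subsystem, attach both namespaces, listen on the first RDMA IP, then snapshot everything with save_config. Condensed, reusing RPC and SOCK from the sketch above:

  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC -s $SOCK nvmf_create_subsystem $NQN -a -s SPDK00000000000001   # -a: allow any host
  $RPC -s $SOCK nvmf_subsystem_add_ns $NQN MallocForNvmf0
  $RPC -s $SOCK nvmf_subsystem_add_ns $NQN MallocForNvmf1
  $RPC -s $SOCK nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420
  $RPC -s $SOCK save_config > spdk_tgt_config.json   # baseline for the reload check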
00:06:05.585 10:47:54 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:05.585 10:47:54 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:05.585 10:47:54 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:05.585 10:47:54 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.489 Calling clear_iscsi_subsystem 00:06:07.489 Calling clear_nvmf_subsystem 00:06:07.489 Calling clear_nbd_subsystem 00:06:07.489 Calling clear_ublk_subsystem 00:06:07.489 Calling clear_vhost_blk_subsystem 00:06:07.489 Calling clear_vhost_scsi_subsystem 00:06:07.489 Calling clear_bdev_subsystem 00:06:07.489 10:47:55 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:07.489 10:47:55 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:07.489 10:47:55 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:07.489 10:47:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.489 10:47:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.489 10:47:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.489 10:47:56 json_config -- json_config/json_config.sh@352 -- # break 00:06:07.489 10:47:56 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:07.489 10:47:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:07.489 10:47:56 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.489 10:47:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.489 10:47:56 json_config -- json_config/common.sh@35 -- # [[ -n 1271567 ]] 00:06:07.489 10:47:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1271567 00:06:07.489 10:47:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.489 10:47:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.489 10:47:56 json_config -- json_config/common.sh@41 -- # kill -0 1271567 00:06:07.489 10:47:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.058 10:47:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.058 10:47:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.058 10:47:56 json_config -- json_config/common.sh@41 -- # kill -0 1271567 00:06:08.058 10:47:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.058 10:47:56 json_config -- json_config/common.sh@43 -- # break 00:06:08.058 10:47:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.058 10:47:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.058 SPDK target shutdown done 00:06:08.058 10:47:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:08.058 INFO: relaunching applications... 
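Note: the shutdown helper sends SIGINT once and then probes the pid for up to thirty half-second intervals; kill -0 delivers no signal, it only tests for process existence. A rough standalone sketch of json_config_test_shutdown_app's loop:

  app_pid=1271567                  # pid recorded at launch
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      if ! kill -0 "$app_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done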
00:06:08.058 10:47:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.058 10:47:56 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.058 10:47:56 json_config -- json_config/common.sh@10 -- # shift 00:06:08.058 10:47:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.058 10:47:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.058 10:47:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.058 10:47:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.058 10:47:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.058 10:47:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1276353 00:06:08.058 10:47:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.058 Waiting for target to run... 00:06:08.058 10:47:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.058 10:47:56 json_config -- json_config/common.sh@25 -- # waitforlisten 1276353 /var/tmp/spdk_tgt.sock 00:06:08.058 10:47:56 json_config -- common/autotest_common.sh@833 -- # '[' -z 1276353 ']' 00:06:08.058 10:47:56 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.058 10:47:56 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.058 10:47:56 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.058 10:47:56 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.058 10:47:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.058 [2024-11-15 10:47:56.846400] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:06:08.058 [2024-11-15 10:47:56.846458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276353 ] 00:06:08.625 [2024-11-15 10:47:57.289441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.625 [2024-11-15 10:47:57.347249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.913 [2024-11-15 10:48:00.400770] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x232acc0/0x23365f0) succeed. 00:06:11.913 [2024-11-15 10:48:00.411742] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x232ceb0/0x23b6280) succeed. 
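Note: the relaunch replays the saved JSON at boot instead of re-issuing RPCs, then blocks until the Unix-domain RPC socket answers. Roughly as below, where the polling loop is a crude stand-in for autotest_common.sh's waitforlisten, which I have not reproduced exactly:

  BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  $BIN -m 0x1 -s 1024 -r $SOCK \
      --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
  app_pid=$!
  # wait for the target to accept RPCs before driving the config checks
  until $RPC -s $SOCK rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done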
00:06:11.913 [2024-11-15 10:48:00.461390] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:12.481 10:48:01 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.481 10:48:01 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:12.481 10:48:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.481 00:06:12.481 10:48:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:12.481 10:48:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:12.481 INFO: Checking if target configuration is the same... 00:06:12.481 10:48:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.481 10:48:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:12.481 10:48:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.481 + '[' 2 -ne 2 ']' 00:06:12.481 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.481 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:12.481 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:12.481 +++ basename /dev/fd/62 00:06:12.481 ++ mktemp /tmp/62.XXX 00:06:12.481 + tmp_file_1=/tmp/62.kcU 00:06:12.481 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.481 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.481 + tmp_file_2=/tmp/spdk_tgt_config.json.aMh 00:06:12.481 + ret=0 00:06:12.481 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.739 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.739 + diff -u /tmp/62.kcU /tmp/spdk_tgt_config.json.aMh 00:06:12.739 + echo 'INFO: JSON config files are the same' 00:06:12.739 INFO: JSON config files are the same 00:06:12.739 + rm /tmp/62.kcU /tmp/spdk_tgt_config.json.aMh 00:06:12.739 + exit 0 00:06:12.739 10:48:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:12.739 10:48:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.739 INFO: changing configuration and checking if this can be detected... 
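Note: the same-config check canonicalizes both sides with config_filter.py -method sort and diffs the results; an empty diff means the reload reproduced the original state exactly. A sketch of the comparison (config_filter.py reading stdin is an assumption based on how json_diff.sh pipes into it):

  FILTER=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
  $RPC -s $SOCK save_config | $FILTER -method sort > /tmp/live.json
  $FILTER -method sort < spdk_tgt_config.json      > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'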
00:06:12.739 10:48:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.739 10:48:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.998 10:48:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:12.998 10:48:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.998 10:48:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.998 + '[' 2 -ne 2 ']' 00:06:12.998 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.998 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:12.998 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:12.998 +++ basename /dev/fd/62 00:06:12.998 ++ mktemp /tmp/62.XXX 00:06:12.998 + tmp_file_1=/tmp/62.EW1 00:06:12.998 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.998 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.998 + tmp_file_2=/tmp/spdk_tgt_config.json.RJ2 00:06:12.998 + ret=0 00:06:12.998 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.257 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.257 + diff -u /tmp/62.EW1 /tmp/spdk_tgt_config.json.RJ2 00:06:13.257 + ret=1 00:06:13.257 + echo '=== Start of file: /tmp/62.EW1 ===' 00:06:13.257 + cat /tmp/62.EW1 00:06:13.257 + echo '=== End of file: /tmp/62.EW1 ===' 00:06:13.257 + echo '' 00:06:13.257 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RJ2 ===' 00:06:13.257 + cat /tmp/spdk_tgt_config.json.RJ2 00:06:13.257 + echo '=== End of file: /tmp/spdk_tgt_config.json.RJ2 ===' 00:06:13.257 + echo '' 00:06:13.257 + rm /tmp/62.EW1 /tmp/spdk_tgt_config.json.RJ2 00:06:13.257 + exit 1 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:13.257 INFO: configuration change detected. 
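Note: change detection is the mirror image: drop the marker bdev and the sorted diff must now fail. A hedged sketch reusing the variables above (process substitution stands in for json_diff.sh's temp files):

  $RPC -s $SOCK bdev_malloc_delete MallocBdevForConfigChangeCheck
  if ! diff -u <($FILTER -method sort < spdk_tgt_config.json) \
               <($RPC -s $SOCK save_config | $FILTER -method sort); then
      echo 'INFO: configuration change detected.'
  fi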
00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 1276353 ]] 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.257 10:48:02 json_config -- json_config/json_config.sh@330 -- # killprocess 1276353 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@952 -- # '[' -z 1276353 ']' 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@956 -- # kill -0 1276353 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@957 -- # uname 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:13.257 10:48:02 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1276353 00:06:13.516 10:48:02 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:13.516 10:48:02 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:13.516 10:48:02 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1276353' 00:06:13.516 killing process with pid 1276353 00:06:13.516 10:48:02 json_config -- common/autotest_common.sh@971 -- # kill 1276353 00:06:13.516 10:48:02 json_config -- common/autotest_common.sh@976 -- # wait 1276353 00:06:14.892 10:48:03 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.892 10:48:03 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:14.892 10:48:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.892 10:48:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.892 10:48:03 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:14.892 10:48:03 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:14.892 INFO: Success 00:06:14.892 10:48:03 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:14.892 10:48:03 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:14.892 10:48:03 json_config -- nvmf/common.sh@121 -- # sync 00:06:14.892 10:48:03 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:06:14.892 10:48:03 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:06:14.892 10:48:03 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:14.892 10:48:03 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:14.892 10:48:03 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:06:14.892 00:06:14.892 real 0m21.543s 00:06:14.892 user 0m23.436s 00:06:14.892 sys 0m6.990s 00:06:14.892 10:48:03 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.892 10:48:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.892 ************************************ 00:06:14.892 END TEST json_config 00:06:14.892 ************************************ 00:06:14.892 10:48:03 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.892 10:48:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.892 10:48:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.892 10:48:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.151 ************************************ 00:06:15.151 START TEST json_config_extra_key 00:06:15.151 ************************************ 00:06:15.151 10:48:03 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:15.151 10:48:03 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.151 10:48:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:15.151 10:48:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.152 10:48:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:15.152 10:48:03 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.152 10:48:03 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:15.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.152 --rc genhtml_branch_coverage=1 00:06:15.152 --rc genhtml_function_coverage=1 00:06:15.152 --rc genhtml_legend=1 00:06:15.152 --rc geninfo_all_blocks=1 00:06:15.152 --rc geninfo_unexecuted_blocks=1 00:06:15.152 00:06:15.152 ' 00:06:15.152 10:48:03 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:15.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.152 --rc genhtml_branch_coverage=1 00:06:15.152 --rc genhtml_function_coverage=1 00:06:15.152 --rc genhtml_legend=1 00:06:15.152 --rc geninfo_all_blocks=1 00:06:15.152 --rc geninfo_unexecuted_blocks=1 00:06:15.152 00:06:15.152 ' 00:06:15.152 10:48:03 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:15.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.152 --rc genhtml_branch_coverage=1 00:06:15.152 --rc genhtml_function_coverage=1 00:06:15.152 --rc genhtml_legend=1 00:06:15.152 --rc geninfo_all_blocks=1 00:06:15.152 --rc geninfo_unexecuted_blocks=1 00:06:15.152 00:06:15.152 ' 00:06:15.152 10:48:03 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:15.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.152 --rc genhtml_branch_coverage=1 00:06:15.152 --rc genhtml_function_coverage=1 00:06:15.152 --rc genhtml_legend=1 00:06:15.152 --rc geninfo_all_blocks=1 00:06:15.152 --rc geninfo_unexecuted_blocks=1 00:06:15.152 00:06:15.152 ' 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.152 
10:48:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.152 10:48:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.152 10:48:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.152 10:48:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.152 10:48:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.152 10:48:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:15.152 10:48:03 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.152 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.152 10:48:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:15.152 INFO: launching applications... 
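Note: common.sh tracks each launched app through four associative arrays keyed by app name, which is what the declare -A lines above set up; json_config_extra_key only swaps in its own config path. The bookkeeping as a sketch (the composed command line in the trailing comment is my reading of json_config_test_start_app, not traced here):

  declare -A app_pid=([target]='')                          # filled in at launch
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')
  # start_app then composes roughly:
  #   spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}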
00:06:15.152 10:48:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:15.152 10:48:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:15.152 10:48:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:15.152 10:48:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:15.152 10:48:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:15.152 10:48:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:15.152 10:48:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.152 10:48:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.153 10:48:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1277778 00:06:15.153 10:48:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:15.153 Waiting for target to run... 00:06:15.153 10:48:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1277778 /var/tmp/spdk_tgt.sock 00:06:15.153 10:48:03 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 1277778 ']' 00:06:15.153 10:48:03 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.153 10:48:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:15.153 10:48:03 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:15.153 10:48:03 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.153 10:48:03 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.153 10:48:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.153 [2024-11-15 10:48:04.020030] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:06:15.153 [2024-11-15 10:48:04.020079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277778 ] 00:06:15.720 [2024-11-15 10:48:04.454648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.720 [2024-11-15 10:48:04.509418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.978 10:48:04 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.978 10:48:04 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:15.978 10:48:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:15.978 00:06:15.978 10:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:15.978 INFO: shutting down applications... 
00:06:15.978 10:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:15.978 10:48:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:15.979 10:48:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.979 10:48:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1277778 ]] 00:06:15.979 10:48:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1277778 00:06:15.979 10:48:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.979 10:48:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.979 10:48:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1277778 00:06:16.237 10:48:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.497 10:48:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.497 10:48:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.497 10:48:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1277778 00:06:16.497 10:48:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:16.497 10:48:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:16.497 10:48:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:16.497 10:48:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:16.497 SPDK target shutdown done 00:06:16.497 10:48:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:16.497 Success 00:06:16.497 00:06:16.497 real 0m1.575s 00:06:16.497 user 0m1.212s 00:06:16.497 sys 0m0.560s 00:06:16.497 10:48:05 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.497 10:48:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:16.497 ************************************ 00:06:16.497 END TEST json_config_extra_key 00:06:16.497 ************************************ 00:06:16.756 10:48:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.756 10:48:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.756 10:48:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.756 10:48:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.756 ************************************ 00:06:16.756 START TEST alias_rpc 00:06:16.756 ************************************ 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.756 * Looking for test storage... 
00:06:16.756 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.756 10:48:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.756 --rc genhtml_branch_coverage=1 00:06:16.756 --rc genhtml_function_coverage=1 00:06:16.756 --rc genhtml_legend=1 00:06:16.756 --rc geninfo_all_blocks=1 00:06:16.756 --rc geninfo_unexecuted_blocks=1 00:06:16.756 00:06:16.756 ' 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.756 --rc genhtml_branch_coverage=1 00:06:16.756 --rc genhtml_function_coverage=1 00:06:16.756 --rc genhtml_legend=1 00:06:16.756 --rc geninfo_all_blocks=1 00:06:16.756 --rc geninfo_unexecuted_blocks=1 00:06:16.756 00:06:16.756 ' 00:06:16.756 10:48:05 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.756 --rc genhtml_branch_coverage=1 00:06:16.756 --rc genhtml_function_coverage=1 00:06:16.756 --rc genhtml_legend=1 00:06:16.756 --rc geninfo_all_blocks=1 00:06:16.756 --rc geninfo_unexecuted_blocks=1 00:06:16.756 00:06:16.756 ' 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:16.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.756 --rc genhtml_branch_coverage=1 00:06:16.756 --rc genhtml_function_coverage=1 00:06:16.756 --rc genhtml_legend=1 00:06:16.756 --rc geninfo_all_blocks=1 00:06:16.756 --rc geninfo_unexecuted_blocks=1 00:06:16.756 00:06:16.756 ' 00:06:16.756 10:48:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.756 10:48:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1278073 00:06:16.756 10:48:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1278073 00:06:16.756 10:48:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 1278073 ']' 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:16.756 10:48:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.014 [2024-11-15 10:48:05.666132] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:06:17.014 [2024-11-15 10:48:05.666186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278073 ] 00:06:17.014 [2024-11-15 10:48:05.728687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.014 [2024-11-15 10:48:05.768995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.273 10:48:05 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:17.273 10:48:05 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:17.273 10:48:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:17.533 10:48:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1278073 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 1278073 ']' 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 1278073 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1278073 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1278073' 00:06:17.533 killing process with pid 1278073 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@971 -- # kill 1278073 00:06:17.533 10:48:06 alias_rpc -- common/autotest_common.sh@976 -- # wait 1278073 00:06:17.791 00:06:17.791 real 0m1.106s 00:06:17.791 user 0m1.134s 00:06:17.791 sys 0m0.400s 00:06:17.791 10:48:06 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.791 10:48:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.791 ************************************ 00:06:17.791 END TEST alias_rpc 00:06:17.791 ************************************ 00:06:17.791 10:48:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:17.791 10:48:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:17.791 10:48:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.791 10:48:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.791 10:48:06 -- common/autotest_common.sh@10 -- # set +x 00:06:17.791 ************************************ 00:06:17.791 START TEST spdkcli_tcp 00:06:17.791 ************************************ 00:06:17.791 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:18.050 * Looking for test storage... 
00:06:18.050 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.050 10:48:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.050 --rc genhtml_branch_coverage=1 00:06:18.050 --rc genhtml_function_coverage=1 00:06:18.050 --rc genhtml_legend=1 00:06:18.050 --rc geninfo_all_blocks=1 00:06:18.050 --rc geninfo_unexecuted_blocks=1 00:06:18.050 00:06:18.050 ' 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.050 --rc genhtml_branch_coverage=1 00:06:18.050 --rc genhtml_function_coverage=1 00:06:18.050 --rc genhtml_legend=1 00:06:18.050 --rc geninfo_all_blocks=1 00:06:18.050 --rc geninfo_unexecuted_blocks=1 
00:06:18.050 00:06:18.050 ' 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.050 --rc genhtml_branch_coverage=1 00:06:18.050 --rc genhtml_function_coverage=1 00:06:18.050 --rc genhtml_legend=1 00:06:18.050 --rc geninfo_all_blocks=1 00:06:18.050 --rc geninfo_unexecuted_blocks=1 00:06:18.050 00:06:18.050 ' 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.050 --rc genhtml_branch_coverage=1 00:06:18.050 --rc genhtml_function_coverage=1 00:06:18.050 --rc genhtml_legend=1 00:06:18.050 --rc geninfo_all_blocks=1 00:06:18.050 --rc geninfo_unexecuted_blocks=1 00:06:18.050 00:06:18.050 ' 00:06:18.050 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:18.050 10:48:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:18.050 10:48:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:18.050 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:18.050 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:18.050 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:18.050 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:18.050 10:48:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.051 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:18.051 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1278365 00:06:18.051 10:48:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1278365 00:06:18.051 10:48:06 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 1278365 ']' 00:06:18.051 10:48:06 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.051 10:48:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.051 10:48:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.051 10:48:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.051 10:48:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.051 [2024-11-15 10:48:06.826564] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
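The spdkcli_tcp test starting here verifies that the target's JSON-RPC server can be driven over TCP rather than the default UNIX socket. The flow, condensed from the commands in the trace (a sketch; waitforlisten stands for the harness helper of that name, and the port and paths are the ones the log shows):

    ./build/bin/spdk_tgt -m 0x3 -p 0 &                        # two reactors (cores 0-1), main core 0
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                             # block until /var/tmp/spdk.sock answers
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge 127.0.0.1:9998 to the socket
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long rpc_get_methods array below is the target's full RPC surface; the test uses it to confirm that a call round-trips through the TCP bridge.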
00:06:18.051 [2024-11-15 10:48:06.826611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278365 ] 00:06:18.051 [2024-11-15 10:48:06.888709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.051 [2024-11-15 10:48:06.933841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.051 [2024-11-15 10:48:06.933846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.309 10:48:07 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.309 10:48:07 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:18.309 10:48:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1278375 00:06:18.309 10:48:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:18.309 10:48:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:18.568 [ 00:06:18.568 "bdev_malloc_delete", 00:06:18.568 "bdev_malloc_create", 00:06:18.568 "bdev_null_resize", 00:06:18.568 "bdev_null_delete", 00:06:18.568 "bdev_null_create", 00:06:18.568 "bdev_nvme_cuse_unregister", 00:06:18.568 "bdev_nvme_cuse_register", 00:06:18.568 "bdev_opal_new_user", 00:06:18.568 "bdev_opal_set_lock_state", 00:06:18.568 "bdev_opal_delete", 00:06:18.568 "bdev_opal_get_info", 00:06:18.568 "bdev_opal_create", 00:06:18.568 "bdev_nvme_opal_revert", 00:06:18.568 "bdev_nvme_opal_init", 00:06:18.568 "bdev_nvme_send_cmd", 00:06:18.568 "bdev_nvme_set_keys", 00:06:18.568 "bdev_nvme_get_path_iostat", 00:06:18.569 "bdev_nvme_get_mdns_discovery_info", 00:06:18.569 "bdev_nvme_stop_mdns_discovery", 00:06:18.569 "bdev_nvme_start_mdns_discovery", 00:06:18.569 "bdev_nvme_set_multipath_policy", 00:06:18.569 "bdev_nvme_set_preferred_path", 00:06:18.569 "bdev_nvme_get_io_paths", 00:06:18.569 "bdev_nvme_remove_error_injection", 00:06:18.569 "bdev_nvme_add_error_injection", 00:06:18.569 "bdev_nvme_get_discovery_info", 00:06:18.569 "bdev_nvme_stop_discovery", 00:06:18.569 "bdev_nvme_start_discovery", 00:06:18.569 "bdev_nvme_get_controller_health_info", 00:06:18.569 "bdev_nvme_disable_controller", 00:06:18.569 "bdev_nvme_enable_controller", 00:06:18.569 "bdev_nvme_reset_controller", 00:06:18.569 "bdev_nvme_get_transport_statistics", 00:06:18.569 "bdev_nvme_apply_firmware", 00:06:18.569 "bdev_nvme_detach_controller", 00:06:18.569 "bdev_nvme_get_controllers", 00:06:18.569 "bdev_nvme_attach_controller", 00:06:18.569 "bdev_nvme_set_hotplug", 00:06:18.569 "bdev_nvme_set_options", 00:06:18.569 "bdev_passthru_delete", 00:06:18.569 "bdev_passthru_create", 00:06:18.569 "bdev_lvol_set_parent_bdev", 00:06:18.569 "bdev_lvol_set_parent", 00:06:18.569 "bdev_lvol_check_shallow_copy", 00:06:18.569 "bdev_lvol_start_shallow_copy", 00:06:18.569 "bdev_lvol_grow_lvstore", 00:06:18.569 "bdev_lvol_get_lvols", 00:06:18.569 "bdev_lvol_get_lvstores", 00:06:18.569 "bdev_lvol_delete", 00:06:18.569 "bdev_lvol_set_read_only", 00:06:18.569 "bdev_lvol_resize", 00:06:18.569 "bdev_lvol_decouple_parent", 00:06:18.569 "bdev_lvol_inflate", 00:06:18.569 "bdev_lvol_rename", 00:06:18.569 "bdev_lvol_clone_bdev", 00:06:18.569 "bdev_lvol_clone", 00:06:18.569 "bdev_lvol_snapshot", 00:06:18.569 "bdev_lvol_create", 00:06:18.569 "bdev_lvol_delete_lvstore", 00:06:18.569 "bdev_lvol_rename_lvstore", 
00:06:18.569 "bdev_lvol_create_lvstore", 00:06:18.569 "bdev_raid_set_options", 00:06:18.569 "bdev_raid_remove_base_bdev", 00:06:18.569 "bdev_raid_add_base_bdev", 00:06:18.569 "bdev_raid_delete", 00:06:18.569 "bdev_raid_create", 00:06:18.569 "bdev_raid_get_bdevs", 00:06:18.569 "bdev_error_inject_error", 00:06:18.569 "bdev_error_delete", 00:06:18.569 "bdev_error_create", 00:06:18.569 "bdev_split_delete", 00:06:18.569 "bdev_split_create", 00:06:18.569 "bdev_delay_delete", 00:06:18.569 "bdev_delay_create", 00:06:18.569 "bdev_delay_update_latency", 00:06:18.569 "bdev_zone_block_delete", 00:06:18.569 "bdev_zone_block_create", 00:06:18.569 "blobfs_create", 00:06:18.569 "blobfs_detect", 00:06:18.569 "blobfs_set_cache_size", 00:06:18.569 "bdev_aio_delete", 00:06:18.569 "bdev_aio_rescan", 00:06:18.569 "bdev_aio_create", 00:06:18.569 "bdev_ftl_set_property", 00:06:18.569 "bdev_ftl_get_properties", 00:06:18.569 "bdev_ftl_get_stats", 00:06:18.569 "bdev_ftl_unmap", 00:06:18.569 "bdev_ftl_unload", 00:06:18.569 "bdev_ftl_delete", 00:06:18.569 "bdev_ftl_load", 00:06:18.569 "bdev_ftl_create", 00:06:18.569 "bdev_virtio_attach_controller", 00:06:18.569 "bdev_virtio_scsi_get_devices", 00:06:18.569 "bdev_virtio_detach_controller", 00:06:18.569 "bdev_virtio_blk_set_hotplug", 00:06:18.569 "bdev_iscsi_delete", 00:06:18.569 "bdev_iscsi_create", 00:06:18.569 "bdev_iscsi_set_options", 00:06:18.569 "accel_error_inject_error", 00:06:18.569 "ioat_scan_accel_module", 00:06:18.569 "dsa_scan_accel_module", 00:06:18.569 "iaa_scan_accel_module", 00:06:18.569 "keyring_file_remove_key", 00:06:18.569 "keyring_file_add_key", 00:06:18.569 "keyring_linux_set_options", 00:06:18.569 "fsdev_aio_delete", 00:06:18.569 "fsdev_aio_create", 00:06:18.569 "iscsi_get_histogram", 00:06:18.569 "iscsi_enable_histogram", 00:06:18.569 "iscsi_set_options", 00:06:18.569 "iscsi_get_auth_groups", 00:06:18.569 "iscsi_auth_group_remove_secret", 00:06:18.569 "iscsi_auth_group_add_secret", 00:06:18.569 "iscsi_delete_auth_group", 00:06:18.569 "iscsi_create_auth_group", 00:06:18.569 "iscsi_set_discovery_auth", 00:06:18.569 "iscsi_get_options", 00:06:18.569 "iscsi_target_node_request_logout", 00:06:18.569 "iscsi_target_node_set_redirect", 00:06:18.569 "iscsi_target_node_set_auth", 00:06:18.569 "iscsi_target_node_add_lun", 00:06:18.569 "iscsi_get_stats", 00:06:18.569 "iscsi_get_connections", 00:06:18.569 "iscsi_portal_group_set_auth", 00:06:18.569 "iscsi_start_portal_group", 00:06:18.569 "iscsi_delete_portal_group", 00:06:18.569 "iscsi_create_portal_group", 00:06:18.569 "iscsi_get_portal_groups", 00:06:18.569 "iscsi_delete_target_node", 00:06:18.569 "iscsi_target_node_remove_pg_ig_maps", 00:06:18.569 "iscsi_target_node_add_pg_ig_maps", 00:06:18.569 "iscsi_create_target_node", 00:06:18.569 "iscsi_get_target_nodes", 00:06:18.569 "iscsi_delete_initiator_group", 00:06:18.569 "iscsi_initiator_group_remove_initiators", 00:06:18.569 "iscsi_initiator_group_add_initiators", 00:06:18.569 "iscsi_create_initiator_group", 00:06:18.569 "iscsi_get_initiator_groups", 00:06:18.569 "nvmf_set_crdt", 00:06:18.569 "nvmf_set_config", 00:06:18.569 "nvmf_set_max_subsystems", 00:06:18.569 "nvmf_stop_mdns_prr", 00:06:18.569 "nvmf_publish_mdns_prr", 00:06:18.569 "nvmf_subsystem_get_listeners", 00:06:18.569 "nvmf_subsystem_get_qpairs", 00:06:18.569 "nvmf_subsystem_get_controllers", 00:06:18.569 "nvmf_get_stats", 00:06:18.569 "nvmf_get_transports", 00:06:18.569 "nvmf_create_transport", 00:06:18.569 "nvmf_get_targets", 00:06:18.569 "nvmf_delete_target", 00:06:18.569 "nvmf_create_target", 
00:06:18.569 "nvmf_subsystem_allow_any_host", 00:06:18.569 "nvmf_subsystem_set_keys", 00:06:18.569 "nvmf_subsystem_remove_host", 00:06:18.569 "nvmf_subsystem_add_host", 00:06:18.569 "nvmf_ns_remove_host", 00:06:18.569 "nvmf_ns_add_host", 00:06:18.569 "nvmf_subsystem_remove_ns", 00:06:18.569 "nvmf_subsystem_set_ns_ana_group", 00:06:18.569 "nvmf_subsystem_add_ns", 00:06:18.569 "nvmf_subsystem_listener_set_ana_state", 00:06:18.569 "nvmf_discovery_get_referrals", 00:06:18.569 "nvmf_discovery_remove_referral", 00:06:18.569 "nvmf_discovery_add_referral", 00:06:18.569 "nvmf_subsystem_remove_listener", 00:06:18.569 "nvmf_subsystem_add_listener", 00:06:18.569 "nvmf_delete_subsystem", 00:06:18.569 "nvmf_create_subsystem", 00:06:18.569 "nvmf_get_subsystems", 00:06:18.569 "env_dpdk_get_mem_stats", 00:06:18.569 "nbd_get_disks", 00:06:18.569 "nbd_stop_disk", 00:06:18.569 "nbd_start_disk", 00:06:18.569 "ublk_recover_disk", 00:06:18.569 "ublk_get_disks", 00:06:18.569 "ublk_stop_disk", 00:06:18.569 "ublk_start_disk", 00:06:18.569 "ublk_destroy_target", 00:06:18.569 "ublk_create_target", 00:06:18.569 "virtio_blk_create_transport", 00:06:18.569 "virtio_blk_get_transports", 00:06:18.569 "vhost_controller_set_coalescing", 00:06:18.569 "vhost_get_controllers", 00:06:18.569 "vhost_delete_controller", 00:06:18.569 "vhost_create_blk_controller", 00:06:18.569 "vhost_scsi_controller_remove_target", 00:06:18.569 "vhost_scsi_controller_add_target", 00:06:18.569 "vhost_start_scsi_controller", 00:06:18.569 "vhost_create_scsi_controller", 00:06:18.569 "thread_set_cpumask", 00:06:18.569 "scheduler_set_options", 00:06:18.569 "framework_get_governor", 00:06:18.569 "framework_get_scheduler", 00:06:18.569 "framework_set_scheduler", 00:06:18.569 "framework_get_reactors", 00:06:18.569 "thread_get_io_channels", 00:06:18.569 "thread_get_pollers", 00:06:18.569 "thread_get_stats", 00:06:18.569 "framework_monitor_context_switch", 00:06:18.569 "spdk_kill_instance", 00:06:18.569 "log_enable_timestamps", 00:06:18.569 "log_get_flags", 00:06:18.569 "log_clear_flag", 00:06:18.569 "log_set_flag", 00:06:18.569 "log_get_level", 00:06:18.569 "log_set_level", 00:06:18.569 "log_get_print_level", 00:06:18.569 "log_set_print_level", 00:06:18.570 "framework_enable_cpumask_locks", 00:06:18.570 "framework_disable_cpumask_locks", 00:06:18.570 "framework_wait_init", 00:06:18.570 "framework_start_init", 00:06:18.570 "scsi_get_devices", 00:06:18.570 "bdev_get_histogram", 00:06:18.570 "bdev_enable_histogram", 00:06:18.570 "bdev_set_qos_limit", 00:06:18.570 "bdev_set_qd_sampling_period", 00:06:18.570 "bdev_get_bdevs", 00:06:18.570 "bdev_reset_iostat", 00:06:18.570 "bdev_get_iostat", 00:06:18.570 "bdev_examine", 00:06:18.570 "bdev_wait_for_examine", 00:06:18.570 "bdev_set_options", 00:06:18.570 "accel_get_stats", 00:06:18.570 "accel_set_options", 00:06:18.570 "accel_set_driver", 00:06:18.570 "accel_crypto_key_destroy", 00:06:18.570 "accel_crypto_keys_get", 00:06:18.570 "accel_crypto_key_create", 00:06:18.570 "accel_assign_opc", 00:06:18.570 "accel_get_module_info", 00:06:18.570 "accel_get_opc_assignments", 00:06:18.570 "vmd_rescan", 00:06:18.570 "vmd_remove_device", 00:06:18.570 "vmd_enable", 00:06:18.570 "sock_get_default_impl", 00:06:18.570 "sock_set_default_impl", 00:06:18.570 "sock_impl_set_options", 00:06:18.570 "sock_impl_get_options", 00:06:18.570 "iobuf_get_stats", 00:06:18.570 "iobuf_set_options", 00:06:18.570 "keyring_get_keys", 00:06:18.570 "framework_get_pci_devices", 00:06:18.570 "framework_get_config", 00:06:18.570 "framework_get_subsystems", 
00:06:18.570 "fsdev_set_opts", 00:06:18.570 "fsdev_get_opts", 00:06:18.570 "trace_get_info", 00:06:18.570 "trace_get_tpoint_group_mask", 00:06:18.570 "trace_disable_tpoint_group", 00:06:18.570 "trace_enable_tpoint_group", 00:06:18.570 "trace_clear_tpoint_mask", 00:06:18.570 "trace_set_tpoint_mask", 00:06:18.570 "notify_get_notifications", 00:06:18.570 "notify_get_types", 00:06:18.570 "spdk_get_version", 00:06:18.570 "rpc_get_methods" 00:06:18.570 ] 00:06:18.570 10:48:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.570 10:48:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:18.570 10:48:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1278365 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 1278365 ']' 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 1278365 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1278365 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1278365' 00:06:18.570 killing process with pid 1278365 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 1278365 00:06:18.570 10:48:07 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 1278365 00:06:19.137 00:06:19.137 real 0m1.115s 00:06:19.137 user 0m1.903s 00:06:19.137 sys 0m0.442s 00:06:19.137 10:48:07 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.137 10:48:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.137 ************************************ 00:06:19.137 END TEST spdkcli_tcp 00:06:19.137 ************************************ 00:06:19.137 10:48:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.137 10:48:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.137 10:48:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.137 10:48:07 -- common/autotest_common.sh@10 -- # set +x 00:06:19.137 ************************************ 00:06:19.137 START TEST dpdk_mem_utility 00:06:19.137 ************************************ 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.137 * Looking for test storage... 
00:06:19.137 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.137 10:48:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.137 --rc genhtml_branch_coverage=1 00:06:19.137 --rc genhtml_function_coverage=1 00:06:19.137 --rc genhtml_legend=1 00:06:19.137 --rc geninfo_all_blocks=1 00:06:19.137 --rc geninfo_unexecuted_blocks=1 00:06:19.137 00:06:19.137 ' 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.137 --rc 
genhtml_branch_coverage=1 00:06:19.137 --rc genhtml_function_coverage=1 00:06:19.137 --rc genhtml_legend=1 00:06:19.137 --rc geninfo_all_blocks=1 00:06:19.137 --rc geninfo_unexecuted_blocks=1 00:06:19.137 00:06:19.137 ' 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:19.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.137 --rc genhtml_branch_coverage=1 00:06:19.137 --rc genhtml_function_coverage=1 00:06:19.137 --rc genhtml_legend=1 00:06:19.137 --rc geninfo_all_blocks=1 00:06:19.137 --rc geninfo_unexecuted_blocks=1 00:06:19.137 00:06:19.137 ' 00:06:19.137 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.137 --rc genhtml_branch_coverage=1 00:06:19.137 --rc genhtml_function_coverage=1 00:06:19.137 --rc genhtml_legend=1 00:06:19.137 --rc geninfo_all_blocks=1 00:06:19.137 --rc geninfo_unexecuted_blocks=1 00:06:19.137 00:06:19.137 ' 00:06:19.137 10:48:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.138 10:48:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1278754 00:06:19.138 10:48:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1278754 00:06:19.138 10:48:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.138 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 1278754 ']' 00:06:19.138 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.138 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.138 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.138 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.138 10:48:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.396 [2024-11-15 10:48:08.026769] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
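The dump below is produced in three steps, all visible in the trace: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK memory snapshot (the reply names /tmp/spdk_mem_dump.txt), then scripts/dpdk_mem_info.py renders that snapshot, first as a summary and then, with -m 0, as the full free/busy element layout of heap 0. A sketch of the same sequence against a live target, run from the repo root:

    ./build/bin/spdk_tgt &
    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0           # per-element layout of heap 0

Note how the mempool and memzone names in the dump carry the target's PID as a suffix (msgpool_1278754, bdev_io_1278754, and so on), which ties the snapshot to this specific run.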
00:06:19.396 [2024-11-15 10:48:08.026821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278754 ] 00:06:19.396 [2024-11-15 10:48:08.090508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.396 [2024-11-15 10:48:08.130835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.657 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.657 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:19.657 10:48:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.657 10:48:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.657 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.657 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.657 { 00:06:19.657 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.657 } 00:06:19.657 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.657 10:48:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.657 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:19.657 1 heaps totaling size 818.000000 MiB 00:06:19.657 size: 818.000000 MiB heap id: 0 00:06:19.657 end heaps---------- 00:06:19.657 9 mempools totaling size 603.782043 MiB 00:06:19.657 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.657 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.657 size: 100.555481 MiB name: bdev_io_1278754 00:06:19.657 size: 50.003479 MiB name: msgpool_1278754 00:06:19.657 size: 36.509338 MiB name: fsdev_io_1278754 00:06:19.657 size: 21.763794 MiB name: PDU_Pool 00:06:19.657 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.657 size: 4.133484 MiB name: evtpool_1278754 00:06:19.657 size: 0.026123 MiB name: Session_Pool 00:06:19.657 end mempools------- 00:06:19.657 6 memzones totaling size 4.142822 MiB 00:06:19.657 size: 1.000366 MiB name: RG_ring_0_1278754 00:06:19.657 size: 1.000366 MiB name: RG_ring_1_1278754 00:06:19.657 size: 1.000366 MiB name: RG_ring_4_1278754 00:06:19.657 size: 1.000366 MiB name: RG_ring_5_1278754 00:06:19.657 size: 0.125366 MiB name: RG_ring_2_1278754 00:06:19.657 size: 0.015991 MiB name: RG_ring_3_1278754 00:06:19.657 end memzones------- 00:06:19.657 10:48:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.657 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:19.657 list of free elements. 
size: 10.852478 MiB 00:06:19.657 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:19.657 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:19.657 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:19.657 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:19.657 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:19.657 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:19.657 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:19.657 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:19.657 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:19.657 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:19.657 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:19.657 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:19.657 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:19.657 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:19.657 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:19.657 list of standard malloc elements. size: 199.218628 MiB 00:06:19.657 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:19.657 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:19.657 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:19.657 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:19.657 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:19.657 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:19.657 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:19.657 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:19.657 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:19.657 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:19.657 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:19.657 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:19.657 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:19.657 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:19.658 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:19.658 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:19.658 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:19.658 list of memzone associated elements. size: 607.928894 MiB 00:06:19.658 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:19.658 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:19.658 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:19.658 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:19.658 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:19.658 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1278754_0 00:06:19.658 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:19.658 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1278754_0 00:06:19.658 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:19.658 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1278754_0 00:06:19.658 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:19.658 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:19.658 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:19.658 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:19.658 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:19.658 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1278754_0 00:06:19.658 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:19.658 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1278754 00:06:19.658 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:19.658 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1278754 00:06:19.658 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:19.658 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:19.658 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:19.658 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:19.658 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:19.658 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:19.658 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:19.658 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:19.658 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:19.658 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1278754 00:06:19.658 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:19.658 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1278754 00:06:19.658 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:19.658 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1278754 00:06:19.658 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:06:19.658 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1278754 00:06:19.658 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:19.658 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1278754 00:06:19.658 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:19.658 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1278754 00:06:19.658 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:19.658 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:19.658 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:19.658 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:19.658 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:19.658 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.658 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:19.658 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1278754 00:06:19.658 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:19.658 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1278754 00:06:19.658 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:19.658 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.658 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:19.658 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.658 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:19.658 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1278754 00:06:19.658 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:19.658 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.658 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:19.658 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1278754 00:06:19.658 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:19.658 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1278754 00:06:19.658 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:19.658 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1278754 00:06:19.658 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:19.658 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.658 10:48:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.658 10:48:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1278754 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 1278754 ']' 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 1278754 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1278754 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1278754' 00:06:19.658 killing process with pid 1278754 00:06:19.658 10:48:08 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 1278754 00:06:19.658 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 1278754 00:06:20.226 00:06:20.226 real 0m1.005s 00:06:20.226 user 0m0.970s 00:06:20.226 sys 0m0.386s 00:06:20.226 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.226 10:48:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.226 ************************************ 00:06:20.226 END TEST dpdk_mem_utility 00:06:20.226 ************************************ 00:06:20.226 10:48:08 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:20.226 10:48:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.226 10:48:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.226 10:48:08 -- common/autotest_common.sh@10 -- # set +x 00:06:20.226 ************************************ 00:06:20.226 START TEST event 00:06:20.226 ************************************ 00:06:20.226 10:48:08 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:20.226 * Looking for test storage... 00:06:20.226 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:20.226 10:48:08 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.226 10:48:08 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.226 10:48:08 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.226 10:48:09 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.226 10:48:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.226 10:48:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.226 10:48:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.226 10:48:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.226 10:48:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.226 10:48:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.226 10:48:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.226 10:48:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.226 10:48:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.226 10:48:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.226 10:48:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.226 10:48:09 event -- scripts/common.sh@344 -- # case "$op" in 00:06:20.226 10:48:09 event -- scripts/common.sh@345 -- # : 1 00:06:20.226 10:48:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.226 10:48:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.226 10:48:09 event -- scripts/common.sh@365 -- # decimal 1 00:06:20.226 10:48:09 event -- scripts/common.sh@353 -- # local d=1 00:06:20.226 10:48:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.226 10:48:09 event -- scripts/common.sh@355 -- # echo 1 00:06:20.226 10:48:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.226 10:48:09 event -- scripts/common.sh@366 -- # decimal 2 00:06:20.226 10:48:09 event -- scripts/common.sh@353 -- # local d=2 00:06:20.226 10:48:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.226 10:48:09 event -- scripts/common.sh@355 -- # echo 2 00:06:20.226 10:48:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.226 10:48:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.226 10:48:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.226 10:48:09 event -- scripts/common.sh@368 -- # return 0 00:06:20.226 10:48:09 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.226 10:48:09 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.226 --rc genhtml_branch_coverage=1 00:06:20.226 --rc genhtml_function_coverage=1 00:06:20.226 --rc genhtml_legend=1 00:06:20.226 --rc geninfo_all_blocks=1 00:06:20.226 --rc geninfo_unexecuted_blocks=1 00:06:20.226 00:06:20.226 ' 00:06:20.226 10:48:09 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.226 --rc genhtml_branch_coverage=1 00:06:20.226 --rc genhtml_function_coverage=1 00:06:20.226 --rc genhtml_legend=1 00:06:20.226 --rc geninfo_all_blocks=1 00:06:20.226 --rc geninfo_unexecuted_blocks=1 00:06:20.226 00:06:20.226 ' 00:06:20.226 10:48:09 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.226 --rc genhtml_branch_coverage=1 00:06:20.226 --rc genhtml_function_coverage=1 00:06:20.226 --rc genhtml_legend=1 00:06:20.226 --rc geninfo_all_blocks=1 00:06:20.226 --rc geninfo_unexecuted_blocks=1 00:06:20.226 00:06:20.226 ' 00:06:20.226 10:48:09 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.226 --rc genhtml_branch_coverage=1 00:06:20.226 --rc genhtml_function_coverage=1 00:06:20.226 --rc genhtml_legend=1 00:06:20.226 --rc geninfo_all_blocks=1 00:06:20.226 --rc geninfo_unexecuted_blocks=1 00:06:20.226 00:06:20.226 ' 00:06:20.227 10:48:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:20.227 10:48:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.227 10:48:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.227 10:48:09 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:20.227 10:48:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.227 10:48:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.227 ************************************ 00:06:20.227 START TEST event_perf 00:06:20.227 ************************************ 00:06:20.227 10:48:09 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:06:20.227 Running I/O for 1 seconds...[2024-11-15 10:48:09.108661] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:06:20.227 [2024-11-15 10:48:09.108728] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279262 ] 00:06:20.485 [2024-11-15 10:48:09.175243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.485 [2024-11-15 10:48:09.220675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.485 [2024-11-15 10:48:09.220773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.485 [2024-11-15 10:48:09.221024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.485 [2024-11-15 10:48:09.221027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.420 Running I/O for 1 seconds... 00:06:21.420 lcore 0: 205449 00:06:21.420 lcore 1: 205446 00:06:21.420 lcore 2: 205446 00:06:21.420 lcore 3: 205448 00:06:21.420 done. 00:06:21.420 00:06:21.420 real 0m1.178s 00:06:21.420 user 0m4.105s 00:06:21.420 sys 0m0.067s 00:06:21.420 10:48:10 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.420 10:48:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.420 ************************************ 00:06:21.420 END TEST event_perf 00:06:21.420 ************************************ 00:06:21.420 10:48:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:21.420 10:48:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:21.420 10:48:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.420 10:48:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.679 ************************************ 00:06:21.679 START TEST event_reactor 00:06:21.679 ************************************ 00:06:21.679 10:48:10 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:21.679 [2024-11-15 10:48:10.359743] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:06:21.679 [2024-11-15 10:48:10.359812] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279563 ] 00:06:21.679 [2024-11-15 10:48:10.427289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.679 [2024-11-15 10:48:10.468309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.631 test_start 00:06:22.631 oneshot 00:06:22.631 tick 100 00:06:22.631 tick 100 00:06:22.631 tick 250 00:06:22.631 tick 100 00:06:22.631 tick 100 00:06:22.631 tick 250 00:06:22.631 tick 100 00:06:22.631 tick 500 00:06:22.631 tick 100 00:06:22.631 tick 100 00:06:22.631 tick 250 00:06:22.631 tick 100 00:06:22.631 tick 100 00:06:22.631 test_end 00:06:22.631 00:06:22.631 real 0m1.168s 00:06:22.631 user 0m1.097s 00:06:22.631 sys 0m0.068s 00:06:22.631 10:48:11 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.631 10:48:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:22.631 ************************************ 00:06:22.631 END TEST event_reactor 00:06:22.631 ************************************ 00:06:22.890 10:48:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.890 10:48:11 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:22.890 10:48:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.890 10:48:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.890 ************************************ 00:06:22.890 START TEST event_reactor_perf 00:06:22.890 ************************************ 00:06:22.890 10:48:11 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.890 [2024-11-15 10:48:11.598083] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
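The three event-framework micro-benchmarks in this stretch are each a single self-contained binary taking a core mask and a run time; condensed, with what each printed in this run noted alongside:

    ./test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors; roughly 205k events per lcore above
    ./test/event/reactor/reactor -t 1                # 1 reactor; the oneshot/tick trace above
    ./test/event/reactor_perf/reactor_perf -t 1      # 1 reactor; the events-per-second figure below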
00:06:22.890 [2024-11-15 10:48:11.598156] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279722 ] 00:06:22.890 [2024-11-15 10:48:11.662258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.890 [2024-11-15 10:48:11.703630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.266 test_start 00:06:24.266 test_end 00:06:24.266 Performance: 503220 events per second 00:06:24.266 00:06:24.266 real 0m1.165s 00:06:24.266 user 0m1.096s 00:06:24.266 sys 0m0.065s 00:06:24.266 10:48:12 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.266 10:48:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.266 ************************************ 00:06:24.266 END TEST event_reactor_perf 00:06:24.266 ************************************ 00:06:24.266 10:48:12 event -- event/event.sh@49 -- # uname -s 00:06:24.266 10:48:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.266 10:48:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.266 10:48:12 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.266 10:48:12 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.266 10:48:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.266 ************************************ 00:06:24.266 START TEST event_scheduler 00:06:24.266 ************************************ 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.266 * Looking for test storage... 
00:06:24.266 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.266 10:48:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:24.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.266 --rc genhtml_branch_coverage=1 00:06:24.266 --rc genhtml_function_coverage=1 00:06:24.266 --rc genhtml_legend=1 00:06:24.266 --rc geninfo_all_blocks=1 00:06:24.266 --rc geninfo_unexecuted_blocks=1 00:06:24.266 00:06:24.266 ' 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:24.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.266 --rc genhtml_branch_coverage=1 00:06:24.266 --rc genhtml_function_coverage=1 00:06:24.266 --rc genhtml_legend=1 00:06:24.266 --rc geninfo_all_blocks=1 00:06:24.266 --rc geninfo_unexecuted_blocks=1 00:06:24.266 00:06:24.266 ' 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:24.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.266 --rc genhtml_branch_coverage=1 00:06:24.266 --rc genhtml_function_coverage=1 00:06:24.266 --rc genhtml_legend=1 00:06:24.266 --rc geninfo_all_blocks=1 00:06:24.266 --rc geninfo_unexecuted_blocks=1 00:06:24.266 00:06:24.266 ' 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:24.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.266 --rc genhtml_branch_coverage=1 00:06:24.266 --rc genhtml_function_coverage=1 00:06:24.266 --rc genhtml_legend=1 00:06:24.266 --rc geninfo_all_blocks=1 00:06:24.266 --rc geninfo_unexecuted_blocks=1 00:06:24.266 00:06:24.266 ' 00:06:24.266 10:48:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.266 10:48:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1280016 00:06:24.266 10:48:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.266 10:48:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.266 10:48:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1280016 
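The scheduler test launched just above starts its app with --wait-for-rpc: the reactors come up, but subsystem initialization parks until an explicit framework_start_init, which gives the test a window to pick the scheduler first. That is why the next lines show framework_set_scheduler dynamic answered before framework_start_init. Condensed, with the flags exactly as in the trace (-p 0x2 selects the main lcore, matching --main-lcore=2 in the EAL parameters below):

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic   # the test issues this while init is parked
    ./scripts/rpc.py framework_start_init              # reactors 0-3 now run under the dynamic scheduler

The dpdk_governor error below is tolerated: with a core mask that covers only some SMT siblings the governor cannot initialize, so the dynamic scheduler falls back to its built-in defaults and logs its load/core/busy limits of 20, 80 and 95.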
00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 1280016 ']' 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.266 10:48:12 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.267 10:48:12 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.267 10:48:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.267 [2024-11-15 10:48:13.007742] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:06:24.267 [2024-11-15 10:48:13.007790] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280016 ] 00:06:24.267 [2024-11-15 10:48:13.067278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.267 [2024-11-15 10:48:13.111085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.267 [2024-11-15 10:48:13.111180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.267 [2024-11-15 10:48:13.111228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.267 [2024-11-15 10:48:13.111230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:24.526 10:48:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 [2024-11-15 10:48:13.179853] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:24.526 [2024-11-15 10:48:13.179873] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:24.526 [2024-11-15 10:48:13.179881] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:24.526 [2024-11-15 10:48:13.179887] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:24.526 [2024-11-15 10:48:13.179894] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 [2024-11-15 10:48:13.254931] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
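scheduler_create_thread, which begins just below, drives the running app through an rpc.py plugin (the scheduler_plugin module loaded with --plugin). Judging by the thread names in the trace (active_pinned at -a 100, idle_pinned at -a 0, one_third_active at -a 30), -m is the pinning cpumask and -a the simulated busy percentage; create returns a thread id that scheduler_thread_set_active can re-tune later. A sketch of the calls the trace issues, where rpc_cmd is the harness wrapper around scripts/rpc.py:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returns thread id 11 here
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50             # re-tune thread 11 to 50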
00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 ************************************ 00:06:24.526 START TEST scheduler_create_thread 00:06:24.526 ************************************ 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 2 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 3 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 4 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 5 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 6 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 7 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 8 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.526 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 9 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.527 10 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.527 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.093 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.093 10:48:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:25.093 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.093 10:48:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.472 10:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.472 10:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:26.472 10:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:26.472 10:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.472 10:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.847 10:48:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.847 00:06:27.847 real 0m3.102s 00:06:27.847 user 0m0.026s 00:06:27.847 sys 0m0.003s 00:06:27.847 10:48:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.847 10:48:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.847 ************************************ 00:06:27.847 END TEST scheduler_create_thread 00:06:27.847 ************************************ 00:06:27.847 10:48:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:27.847 10:48:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1280016 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 1280016 ']' 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 1280016 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1280016 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1280016' 00:06:27.847 killing process with pid 1280016 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 1280016 00:06:27.847 10:48:16 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 1280016 00:06:28.105 [2024-11-15 10:48:16.770139] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
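The thread lifecycle exercised by scheduler_create_thread above comes down to a handful of plugin RPCs. A minimal sketch, assuming $SPDK_DIR as before and that PYTHONPATH lets rpc.py import the test's scheduler_plugin module from test/event/scheduler (the suite's rpc_cmd wrapper arranges the equivalent wiring):

    # Make the test plugin importable, then wrap the plugin-aware RPC client.
    export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"
    rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
    tid=$(rpc scheduler_thread_create -n half_active -a 0)       # unpinned; the RPC returns the thread id
    rpc scheduler_thread_set_active "$tid" 50                    # raise its reported load to 50%
    rpc scheduler_thread_delete "$tid"                           # and tear it down again

That id capture mirrors the thread_id=11 / thread_id=12 assignments in the trace: the create RPC prints the new thread's id, which later set_active and delete calls consume.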
00:06:28.105 00:06:28.105 real 0m4.142s 00:06:28.105 user 0m6.719s 00:06:28.105 sys 0m0.345s 00:06:28.105 10:48:16 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.105 10:48:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.105 ************************************ 00:06:28.105 END TEST event_scheduler 00:06:28.105 ************************************ 00:06:28.105 10:48:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:28.363 10:48:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:28.363 10:48:16 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.363 10:48:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.363 10:48:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.363 ************************************ 00:06:28.363 START TEST app_repeat 00:06:28.363 ************************************ 00:06:28.363 10:48:17 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1280723 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1280723' 00:06:28.363 Process app_repeat pid: 1280723 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:28.363 spdk_app_start Round 0 00:06:28.363 10:48:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1280723 /var/tmp/spdk-nbd.sock 00:06:28.363 10:48:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1280723 ']' 00:06:28.363 10:48:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.363 10:48:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:28.364 10:48:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.364 10:48:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:28.364 10:48:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.364 [2024-11-15 10:48:17.072394] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:06:28.364 [2024-11-15 10:48:17.072447] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280723 ] 00:06:28.364 [2024-11-15 10:48:17.137234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.364 [2024-11-15 10:48:17.182375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.364 [2024-11-15 10:48:17.182378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.621 10:48:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:28.621 10:48:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:28.621 10:48:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.621 Malloc0 00:06:28.621 10:48:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.880 Malloc1 00:06:28.880 10:48:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.880 10:48:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.138 /dev/nbd0 00:06:29.138 10:48:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.138 10:48:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 
00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.138 1+0 records in 00:06:29.138 1+0 records out 00:06:29.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237259 s, 17.3 MB/s 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:29.138 10:48:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:29.138 10:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.138 10:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.138 10:48:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.396 /dev/nbd1 00:06:29.396 10:48:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.396 10:48:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.396 1+0 records in 00:06:29.396 1+0 records out 00:06:29.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208339 s, 19.7 MB/s 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:29.396 10:48:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:29.396 10:48:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.396 10:48:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.396 10:48:18 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.396 10:48:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.396 10:48:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.655 { 00:06:29.655 "nbd_device": "/dev/nbd0", 00:06:29.655 "bdev_name": "Malloc0" 00:06:29.655 }, 00:06:29.655 { 00:06:29.655 "nbd_device": "/dev/nbd1", 00:06:29.655 "bdev_name": "Malloc1" 00:06:29.655 } 00:06:29.655 ]' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.655 { 00:06:29.655 "nbd_device": "/dev/nbd0", 00:06:29.655 "bdev_name": "Malloc0" 00:06:29.655 }, 00:06:29.655 { 00:06:29.655 "nbd_device": "/dev/nbd1", 00:06:29.655 "bdev_name": "Malloc1" 00:06:29.655 } 00:06:29.655 ]' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.655 /dev/nbd1' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.655 /dev/nbd1' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.655 256+0 records in 00:06:29.655 256+0 records out 00:06:29.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105614 s, 99.3 MB/s 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.655 256+0 records in 00:06:29.655 256+0 records out 00:06:29.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140831 s, 74.5 MB/s 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.655 256+0 records in 00:06:29.655 256+0 records out 00:06:29.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145168 s, 72.2 MB/s 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.655 10:48:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.914 10:48:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.173 10:48:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.431 10:48:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.431 10:48:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.690 10:48:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.690 [2024-11-15 10:48:19.524493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.690 [2024-11-15 10:48:19.561212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.690 [2024-11-15 10:48:19.561216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.948 [2024-11-15 10:48:19.602286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.948 [2024-11-15 10:48:19.602324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.234 10:48:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.234 10:48:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.234 spdk_app_start Round 1 00:06:34.234 10:48:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1280723 /var/tmp/spdk-nbd.sock 00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1280723 ']' 00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.234 10:48:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:34.234 10:48:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.234 Malloc0 00:06:34.234 10:48:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.234 Malloc1 00:06:34.234 10:48:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.234 10:48:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.493 /dev/nbd0 00:06:34.493 10:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.493 10:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:34.493 1+0 records in 00:06:34.493 1+0 records out 00:06:34.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192595 s, 21.3 MB/s 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:34.493 10:48:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:34.493 10:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.493 10:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.493 10:48:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.752 /dev/nbd1 00:06:34.752 10:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.752 10:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:34.752 10:48:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.752 1+0 records in 00:06:34.752 1+0 records out 00:06:34.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155684 s, 26.3 MB/s 00:06:34.753 10:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.753 10:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:34.753 10:48:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.753 10:48:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:34.753 10:48:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:34.753 10:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.753 10:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.753 10:48:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.753 10:48:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.753 10:48:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.011 { 00:06:35.011 
"nbd_device": "/dev/nbd0", 00:06:35.011 "bdev_name": "Malloc0" 00:06:35.011 }, 00:06:35.011 { 00:06:35.011 "nbd_device": "/dev/nbd1", 00:06:35.011 "bdev_name": "Malloc1" 00:06:35.011 } 00:06:35.011 ]' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.011 { 00:06:35.011 "nbd_device": "/dev/nbd0", 00:06:35.011 "bdev_name": "Malloc0" 00:06:35.011 }, 00:06:35.011 { 00:06:35.011 "nbd_device": "/dev/nbd1", 00:06:35.011 "bdev_name": "Malloc1" 00:06:35.011 } 00:06:35.011 ]' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.011 /dev/nbd1' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.011 /dev/nbd1' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.011 256+0 records in 00:06:35.011 256+0 records out 00:06:35.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00983301 s, 107 MB/s 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.011 256+0 records in 00:06:35.011 256+0 records out 00:06:35.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139369 s, 75.2 MB/s 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.011 256+0 records in 00:06:35.011 256+0 records out 00:06:35.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146609 s, 71.5 MB/s 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.011 10:48:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.269 10:48:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.528 10:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.788 10:48:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.788 10:48:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.788 10:48:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:36.046 [2024-11-15 10:48:24.798805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.046 [2024-11-15 10:48:24.836082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.046 [2024-11-15 10:48:24.836086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.046 [2024-11-15 10:48:24.877654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.046 [2024-11-15 10:48:24.877692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.415 10:48:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.415 10:48:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:39.415 spdk_app_start Round 2 00:06:39.415 10:48:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1280723 /var/tmp/spdk-nbd.sock 00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1280723 ']' 00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:39.415 10:48:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:39.415 10:48:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.415 Malloc0 00:06:39.415 10:48:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.415 Malloc1 00:06:39.415 10:48:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.415 10:48:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.706 /dev/nbd0 00:06:39.706 10:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.706 10:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct
00:06:39.706 1+0 records in
00:06:39.706 1+0 records out
00:06:39.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228914 s, 17.9 MB/s
00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:06:39.706 10:48:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:06:39.706 10:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:39.706 10:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:39.706 10:48:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:39.965 /dev/nbd1
00:06:39.965 10:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:39.965 10:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:39.965 1+0 records in
00:06:39.965 1+0 records out
00:06:39.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212159 s, 19.3 MB/s
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:06:39.965 10:48:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:06:39.965 10:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:39.965 10:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:39.965 10:48:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:39.965 10:48:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:39.965 10:48:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:40.224 {
00:06:40.224 "nbd_device": "/dev/nbd0",
00:06:40.224 "bdev_name": "Malloc0"
00:06:40.224 },
00:06:40.224 {
00:06:40.224 "nbd_device": "/dev/nbd1",
00:06:40.224 "bdev_name": "Malloc1"
00:06:40.224 }
00:06:40.224 ]'
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:40.224 {
00:06:40.224 "nbd_device": "/dev/nbd0",
00:06:40.224 "bdev_name": "Malloc0"
00:06:40.224 },
00:06:40.224 {
00:06:40.224 "nbd_device": "/dev/nbd1",
00:06:40.224 "bdev_name": "Malloc1"
00:06:40.224 }
00:06:40.224 ]'
10:48:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
10:48:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:40.224 /dev/nbd1'
10:48:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:40.224 /dev/nbd1'
10:48:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:40.224 256+0 records in
00:06:40.224 256+0 records out
00:06:40.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106738 s, 98.2 MB/s
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:40.224 256+0 records in
00:06:40.224 256+0 records out
00:06:40.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139583 s, 75.1 MB/s
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:40.224 10:48:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:40.224 256+0 records in
00:06:40.224 256+0 records out
00:06:40.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149383 s, 70.2 MB/s
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:40.224 10:48:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:40.483 10:48:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:40.741 10:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:40.741 10:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:40.742 10:48:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:41.001 10:48:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:41.001 10:48:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:41.259 10:48:29 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:41.259 [2024-11-15 10:48:30.064180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:41.259 [2024-11-15 10:48:30.107913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:41.259 [2024-11-15 10:48:30.107918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.518 [2024-11-15 10:48:30.149396] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:41.518 [2024-11-15 10:48:30.149430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:44.118 10:48:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1280723 /var/tmp/spdk-nbd.sock
00:06:44.118 10:48:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1280723 ']'
00:06:44.118 10:48:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:44.118 10:48:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:44.118 10:48:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:44.118 10:48:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:44.118 10:48:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:06:44.378 10:48:33 event.app_repeat -- event/event.sh@39 -- # killprocess 1280723
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 1280723 ']'
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 1280723
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@957 -- # uname
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1280723
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:44.378 10:48:33 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1280723'
killing process with pid 1280723
10:48:33 event.app_repeat -- common/autotest_common.sh@971 -- # kill 1280723
10:48:33 event.app_repeat -- common/autotest_common.sh@976 -- # wait 1280723
00:06:44.638 spdk_app_start is called in Round 0.
00:06:44.638 Shutdown signal received, stop current app iteration
00:06:44.638 Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 reinitialization...
00:06:44.638 spdk_app_start is called in Round 1.
00:06:44.638 Shutdown signal received, stop current app iteration
00:06:44.638 Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 reinitialization...
00:06:44.638 spdk_app_start is called in Round 2.
00:06:44.638 Shutdown signal received, stop current app iteration
00:06:44.638 Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 reinitialization...
00:06:44.638 spdk_app_start is called in Round 3.
00:06:44.638 Shutdown signal received, stop current app iteration
00:06:44.638 10:48:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:44.638 10:48:33 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:44.638
00:06:44.638 real 0m16.249s
00:06:44.638 user 0m35.621s
00:06:44.638 sys 0m2.554s
00:06:44.638 10:48:33 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:44.638 10:48:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:44.638 ************************************
00:06:44.638 END TEST app_repeat
************************************
00:06:44.638 10:48:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:44.638 10:48:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:44.638 10:48:33 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:44.638 10:48:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:44.638 10:48:33 event -- common/autotest_common.sh@10 -- # set +x
00:06:44.638 ************************************
00:06:44.638 START TEST cpu_locks
************************************
00:06:44.638 10:48:33 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:44.638 * Looking for test storage...
00:06:44.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
00:06:44.638 10:48:33 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:44.638 10:48:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:06:44.638 10:48:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:44.638 10:48:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:44.638 10:48:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:44.639 10:48:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:44.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.639 --rc genhtml_branch_coverage=1
00:06:44.639 --rc genhtml_function_coverage=1
00:06:44.639 --rc genhtml_legend=1
00:06:44.639 --rc geninfo_all_blocks=1
00:06:44.639 --rc geninfo_unexecuted_blocks=1
00:06:44.639
00:06:44.639 '
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:44.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.639 --rc genhtml_branch_coverage=1
00:06:44.639 --rc genhtml_function_coverage=1
00:06:44.639 --rc genhtml_legend=1
00:06:44.639 --rc geninfo_all_blocks=1
00:06:44.639 --rc geninfo_unexecuted_blocks=1
00:06:44.639
00:06:44.639 '
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:44.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.639 --rc genhtml_branch_coverage=1
00:06:44.639 --rc genhtml_function_coverage=1
00:06:44.639 --rc genhtml_legend=1
00:06:44.639 --rc geninfo_all_blocks=1
00:06:44.639 --rc geninfo_unexecuted_blocks=1
00:06:44.639
00:06:44.639 '
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:44.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:44.639 --rc genhtml_branch_coverage=1
00:06:44.639 --rc genhtml_function_coverage=1
00:06:44.639 --rc genhtml_legend=1
00:06:44.639 --rc geninfo_all_blocks=1
00:06:44.639 --rc geninfo_unexecuted_blocks=1
00:06:44.639
00:06:44.639 '
00:06:44.639 10:48:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:44.639 10:48:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:44.639 10:48:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:44.639 10:48:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:44.639 10:48:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:44.897 ************************************
00:06:44.897 START TEST default_locks
************************************
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1283729
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1283729
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1283729 ']'
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:44.897 10:48:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:44.897 [2024-11-15 10:48:33.597979] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:44.897 [2024-11-15 10:48:33.598024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283729 ]
00:06:44.897 [2024-11-15 10:48:33.659073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.897 [2024-11-15 10:48:33.701469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.156 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:45.156 10:48:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:06:45.156 10:48:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1283729
00:06:45.156 10:48:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1283729
00:06:45.156 10:48:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:45.723 lslocks: write error
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1283729
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 1283729 ']'
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 1283729
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1283729
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:45.723 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1283729'
killing process with pid 1283729
10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 1283729
10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 1283729
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1283729
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1283729
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1283729
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1283729 ']'
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:45.982 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1283729) - No such process
00:06:45.982 ERROR: process (pid: 1283729) is no longer running
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:45.982 10:48:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:45.983
00:06:45.983 real 0m1.202s
00:06:45.983 user 0m1.163s
00:06:45.983 sys 0m0.549s
00:06:45.983 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:45.983 10:48:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:45.983 ************************************
00:06:45.983 END TEST default_locks
************************************
00:06:45.983 10:48:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:45.983 10:48:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:45.983 10:48:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:45.983 10:48:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:45.983 ************************************
00:06:45.983 START TEST default_locks_via_rpc
************************************
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1283987
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1283987
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1283987 ']'
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:45.983 10:48:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:45.983 [2024-11-15 10:48:34.862540] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:45.983 [2024-11-15 10:48:34.862583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283987 ]
00:06:46.242 [2024-11-15 10:48:34.923149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:46.242 [2024-11-15 10:48:34.965781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1283987
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1283987
00:06:46.501 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1283987
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 1283987 ']'
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 1283987
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1283987
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:46.760 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1283987'
killing process with pid 1283987
10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 1283987
10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 1283987
00:06:47.018
00:06:47.018 real 0m1.085s
00:06:47.018 user 0m1.052s
00:06:47.018 sys 0m0.489s
00:06:47.018 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:47.018 10:48:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.019 ************************************
00:06:47.019 END TEST default_locks_via_rpc
************************************
00:06:47.277 10:48:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:47.277 10:48:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:47.277 10:48:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:47.277 10:48:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:47.277 ************************************
00:06:47.277 START TEST non_locking_app_on_locked_coremask
************************************
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1284251
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1284251 /var/tmp/spdk.sock
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1284251 ']'
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:47.277 10:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:47.277 [2024-11-15 10:48:36.007848] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:47.278 [2024-11-15 10:48:36.007890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284251 ]
00:06:47.278 [2024-11-15 10:48:36.067832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.278 [2024-11-15 10:48:36.110143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1284265
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1284265 /var/tmp/spdk2.sock
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1284265 ']'
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:47.537 10:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:47.537 [2024-11-15 10:48:36.358205] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:47.537 [2024-11-15 10:48:36.358257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284265 ]
00:06:47.795 [2024-11-15 10:48:36.450042] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:47.795 [2024-11-15 10:48:36.450069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.795 [2024-11-15 10:48:36.539249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.363 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:48.363 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:48.363 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1284251
00:06:48.363 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1284251
00:06:48.363 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:49.297 lslocks: write error
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1284251
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1284251 ']'
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1284251
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1284251
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:49.297 10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1284251'
killing process with pid 1284251
10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1284251
10:48:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1284251
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1284265
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1284265 ']'
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1284265
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1284265
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:49.863 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1284265'
killing process with pid 1284265
10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1284265
10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1284265
00:06:50.121
00:06:50.121 real 0m2.866s
00:06:50.121 user 0m3.038s
00:06:50.121 sys 0m0.939s
00:06:50.121 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:50.121 10:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:50.121 ************************************
00:06:50.121 END TEST non_locking_app_on_locked_coremask
************************************
00:06:50.121 10:48:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:50.121 10:48:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:50.121 10:48:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:50.121 10:48:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:50.121 ************************************
00:06:50.121 START TEST locking_app_on_unlocked_coremask
************************************
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1284751
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1284751 /var/tmp/spdk.sock
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1284751 ']'
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:50.121 10:48:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:50.121 [2024-11-15 10:48:38.935344] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:50.121 [2024-11-15 10:48:38.935383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284751 ]
00:06:50.121 [2024-11-15 10:48:38.999853] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:50.121 [2024-11-15 10:48:38.999879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.380 [2024-11-15 10:48:39.038435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1284856
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1284856 /var/tmp/spdk2.sock
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1284856 ']'
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:50.380 10:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:50.638 [2024-11-15 10:48:39.300646] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:50.638 [2024-11-15 10:48:39.300693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284856 ]
00:06:50.638 [2024-11-15 10:48:39.389865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.638 [2024-11-15 10:48:39.471061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.572 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:51.572 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:51.572 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1284856
00:06:51.572 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1284856
00:06:51.572 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:52.138 lslocks: write error
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1284751
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1284751 ']'
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1284751
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1284751
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:52.138 10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1284751'
killing process with pid 1284751
10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1284751
10:48:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1284751
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1284856
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1284856 ']'
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1284856
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1284856
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:52.707 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1284856'
killing process with pid 1284856
10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1284856
10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1284856
00:06:52.965
00:06:52.965 real 0m2.921s
00:06:52.965 user 0m3.081s
00:06:52.965 sys 0m0.976s
00:06:52.965 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:52.965 10:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:52.965 ************************************
00:06:52.965 END TEST locking_app_on_unlocked_coremask
************************************
00:06:52.965 10:48:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:52.965 10:48:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:52.965 10:48:41 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:52.965 10:48:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:53.223 ************************************
00:06:53.223 START TEST locking_app_on_locked_coremask
************************************
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1285280
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1285280 /var/tmp/spdk.sock
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1285280 ']'
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:53.223 10:48:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:53.223 [2024-11-15 10:48:41.923611] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:53.223 [2024-11-15 10:48:41.923651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285280 ]
00:06:53.223 [2024-11-15 10:48:41.985964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.223 [2024-11-15 10:48:42.028692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1285485
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1285485 /var/tmp/spdk2.sock
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1285485 /var/tmp/spdk2.sock
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1285485 /var/tmp/spdk2.sock
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1285485 ']'
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:53.482 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:53.482 [2024-11-15 10:48:42.283057] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:53.482 [2024-11-15 10:48:42.283105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285485 ]
00:06:53.740 [2024-11-15 10:48:42.370438] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1285280 has claimed it.
00:06:53.740 [2024-11-15 10:48:42.370471] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:54.305 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1285485) - No such process
00:06:54.305 ERROR: process (pid: 1285485) is no longer running
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1285280
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1285280
00:06:54.305 10:48:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:54.563 lslocks: write error
00:06:54.563 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1285280
00:06:54.563 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1285280 ']'
00:06:54.563 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1285280
00:06:54.563 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:54.563 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:54.822 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1285280
00:06:54.822 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:54.822 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:54.822 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1285280'
killing process with pid 1285280
10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1285280
10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1285280
00:06:55.081
00:06:55.081 real 0m1.922s
00:06:55.081 user 0m2.046s
00:06:55.081 sys 0m0.626s
00:06:55.081 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:55.081 10:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:55.081 ************************************
00:06:55.081 END TEST locking_app_on_locked_coremask
************************************
00:06:55.081 10:48:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:55.081 10:48:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:55.081 10:48:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:55.081 10:48:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:55.081 ************************************
00:06:55.081 START TEST locking_overlapped_coremask
************************************
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1285749
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1285749 /var/tmp/spdk.sock
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1285749 ']'
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:55.081 10:48:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:55.081 [2024-11-15 10:48:43.916922] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:55.081 [2024-11-15 10:48:43.916965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285749 ]
00:06:55.340 [2024-11-15 10:48:43.979431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:55.340 [2024-11-15 10:48:44.020029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:55.340 [2024-11-15 10:48:44.020137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:55.340 [2024-11-15 10:48:44.020145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.597 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:55.597 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:55.597 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1285764
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1285764 /var/tmp/spdk2.sock
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1285764 /var/tmp/spdk2.sock
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1285764 /var/tmp/spdk2.sock
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1285764 ']'
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:55.598 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:55.598 [2024-11-15 10:48:44.278614] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:06:55.598 [2024-11-15 10:48:44.278661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285764 ]
00:06:55.598 [2024-11-15 10:48:44.368866] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1285749 has claimed it.
00:06:55.598 [2024-11-15 10:48:44.368906] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:56.164 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1285764) - No such process
00:06:56.164 ERROR: process (pid: 1285764) is no longer running
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1285749
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 1285749 ']'
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 1285749
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1285749
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:56.164 10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1285749'
killing process with pid 1285749
10:48:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 1285749
10:48:44
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 1285749 00:06:56.423 00:06:56.423 real 0m1.415s 00:06:56.423 user 0m3.910s 00:06:56.423 sys 0m0.385s 00:06:56.423 10:48:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.423 10:48:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 ************************************ 00:06:56.423 END TEST locking_overlapped_coremask 00:06:56.423 ************************************ 00:06:56.681 10:48:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:56.681 10:48:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.681 10:48:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.681 10:48:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.681 ************************************ 00:06:56.681 START TEST locking_overlapped_coremask_via_rpc 00:06:56.681 ************************************ 00:06:56.681 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:56.681 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1286023 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1286023 /var/tmp/spdk.sock 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1286023 ']' 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.682 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:56.682 [2024-11-15 10:48:45.391926] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:06:56.682 [2024-11-15 10:48:45.391965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286023 ] 00:06:56.682 [2024-11-15 10:48:45.453689] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
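The check_remaining_locks step traced a few records up reduces to comparing the lock files SPDK drops in /var/tmp against what the core mask predicts. A minimal sketch of that comparison, reusing the exact expressions from the trace (the helper itself lives in the event/cpu_locks.sh suite; mask 0x7 claims cores 0-2):

    # One flock()ed file per claimed core.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # Pass iff exactly cores 0-2 hold locks.
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]

The complementary check, also visible in the trace, is lslocks -p <pid> piped through grep -q spdk_cpu_lock, confirming the surviving target process is the one actually holding those flocks.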
00:06:56.682 [2024-11-15 10:48:45.453713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.682 [2024-11-15 10:48:45.499243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.682 [2024-11-15 10:48:45.499343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.682 [2024-11-15 10:48:45.499346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1286034 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1286034 /var/tmp/spdk2.sock 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1286034 ']' 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.941 10:48:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.941 [2024-11-15 10:48:45.756033] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:06:56.941 [2024-11-15 10:48:45.756082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286034 ] 00:06:57.200 [2024-11-15 10:48:45.848280] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
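Both targets in the via_rpc variant start with "CPU core locks deactivated", which is why two overlapping core masks boot cleanly here, unlike the plain locking_overlapped_coremask case above where the second target aborts at startup. The two launches as traced, with the masks decoded (0x7 = cores 0-2, 0x1c = cores 2-4, so core 2 is contested; binary paths shortened from the trace):

    spdk_tgt -m 0x7  --disable-cpumask-locks &                       # target 1: cores 0-2
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks  # target 2: cores 2-4

Neither process takes /var/tmp/spdk_cpu_lock_* at startup; the conflict on core 2 is provoked on demand through the framework_enable_cpumask_locks RPC, as the next records show.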
00:06:57.200 [2024-11-15 10:48:45.848305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.200 [2024-11-15 10:48:45.936337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.200 [2024-11-15 10:48:45.940206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.200 [2024-11-15 10:48:45.940207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.768 [2024-11-15 10:48:46.610236] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1286023 has claimed it. 
00:06:57.768 request: 00:06:57.768 { 00:06:57.768 "method": "framework_enable_cpumask_locks", 00:06:57.768 "req_id": 1 00:06:57.768 } 00:06:57.768 Got JSON-RPC error response 00:06:57.768 response: 00:06:57.768 { 00:06:57.768 "code": -32603, 00:06:57.768 "message": "Failed to claim CPU core: 2" 00:06:57.768 } 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1286023 /var/tmp/spdk.sock 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1286023 ']' 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.768 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.769 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1286034 /var/tmp/spdk2.sock 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1286034 ']' 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
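The request/response pair above is the heart of the test: the first target enables its core locks over RPC, and the second target's attempt then fails because core 2 is already flock()ed by pid 1286023. A sketch of the two calls as the harness makes them (rpc_cmd in the trace is a wrapper over scripts/rpc.py; socket paths per the launches above):

    scripts/rpc.py framework_enable_cpumask_locks                         # target 1 on spdk.sock: ok
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # target 2: error -32603

The -32603 "Failed to claim CPU core: 2" body surfaces the claim_cpu_cores error from app.c back to the RPC caller, which is exactly the failure the NOT wrapper asserts before check_remaining_locks confirms cores 0-2 still belong to pid 1286023.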
00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.027 10:48:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.285 00:06:58.285 real 0m1.694s 00:06:58.285 user 0m0.817s 00:06:58.285 sys 0m0.130s 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.285 10:48:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.285 ************************************ 00:06:58.285 END TEST locking_overlapped_coremask_via_rpc 00:06:58.285 ************************************ 00:06:58.285 10:48:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.285 10:48:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1286023 ]] 00:06:58.285 10:48:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1286023 00:06:58.285 10:48:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1286023 ']' 00:06:58.285 10:48:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1286023 00:06:58.285 10:48:47 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:58.285 10:48:47 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:58.286 10:48:47 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1286023 00:06:58.286 10:48:47 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:58.286 10:48:47 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:58.286 10:48:47 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1286023' 00:06:58.286 killing process with pid 1286023 00:06:58.286 10:48:47 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1286023 00:06:58.286 10:48:47 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1286023 00:06:58.544 10:48:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1286034 ]] 00:06:58.544 10:48:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1286034 00:06:58.544 10:48:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1286034 ']' 00:06:58.544 10:48:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1286034 00:06:58.544 10:48:47 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:58.803 10:48:47 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:06:58.803 10:48:47 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1286034 00:06:58.803 10:48:47 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:58.803 10:48:47 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:58.803 10:48:47 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1286034' 00:06:58.803 killing process with pid 1286034 00:06:58.803 10:48:47 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1286034 00:06:58.803 10:48:47 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1286034 00:06:59.062 10:48:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.062 10:48:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:59.062 10:48:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1286023 ]] 00:06:59.062 10:48:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1286023 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1286023 ']' 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1286023 00:06:59.062 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1286023) - No such process 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1286023 is not found' 00:06:59.062 Process with pid 1286023 is not found 00:06:59.062 10:48:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1286034 ]] 00:06:59.062 10:48:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1286034 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1286034 ']' 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1286034 00:06:59.062 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1286034) - No such process 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1286034 is not found' 00:06:59.062 Process with pid 1286034 is not found 00:06:59.062 10:48:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.062 00:06:59.062 real 0m14.441s 00:06:59.062 user 0m24.811s 00:06:59.062 sys 0m5.007s 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.062 10:48:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.062 ************************************ 00:06:59.062 END TEST cpu_locks 00:06:59.062 ************************************ 00:06:59.062 00:06:59.062 real 0m38.949s 00:06:59.062 user 1m13.725s 00:06:59.062 sys 0m8.477s 00:06:59.062 10:48:47 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.062 10:48:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.062 ************************************ 00:06:59.062 END TEST event 00:06:59.062 ************************************ 00:06:59.062 10:48:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:59.062 10:48:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.062 10:48:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.062 10:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:59.062 ************************************ 00:06:59.062 START TEST thread 00:06:59.062 ************************************ 00:06:59.062 10:48:47 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:59.320 * Looking for test storage... 00:06:59.320 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:59.320 10:48:47 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.320 10:48:47 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.320 10:48:47 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.320 10:48:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.320 10:48:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.320 10:48:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.320 10:48:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.320 10:48:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.320 10:48:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.320 10:48:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.320 10:48:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.320 10:48:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.320 10:48:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.320 10:48:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.320 10:48:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:59.320 10:48:48 thread -- scripts/common.sh@345 -- # : 1 00:06:59.320 10:48:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.320 10:48:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.320 10:48:48 thread -- scripts/common.sh@365 -- # decimal 1 00:06:59.320 10:48:48 thread -- scripts/common.sh@353 -- # local d=1 00:06:59.320 10:48:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.320 10:48:48 thread -- scripts/common.sh@355 -- # echo 1 00:06:59.320 10:48:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.320 10:48:48 thread -- scripts/common.sh@366 -- # decimal 2 00:06:59.320 10:48:48 thread -- scripts/common.sh@353 -- # local d=2 00:06:59.320 10:48:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.320 10:48:48 thread -- scripts/common.sh@355 -- # echo 2 00:06:59.320 10:48:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.320 10:48:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.320 10:48:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.320 10:48:48 thread -- scripts/common.sh@368 -- # return 0 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.320 --rc genhtml_branch_coverage=1 00:06:59.320 --rc genhtml_function_coverage=1 00:06:59.320 --rc genhtml_legend=1 00:06:59.320 --rc geninfo_all_blocks=1 00:06:59.320 --rc geninfo_unexecuted_blocks=1 00:06:59.320 00:06:59.320 ' 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.320 --rc genhtml_branch_coverage=1 00:06:59.320 --rc genhtml_function_coverage=1 00:06:59.320 --rc genhtml_legend=1 00:06:59.320 --rc geninfo_all_blocks=1 00:06:59.320 --rc geninfo_unexecuted_blocks=1 00:06:59.320 00:06:59.320 ' 00:06:59.320 10:48:48 thread -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.320 --rc genhtml_branch_coverage=1 00:06:59.320 --rc genhtml_function_coverage=1 00:06:59.320 --rc genhtml_legend=1 00:06:59.320 --rc geninfo_all_blocks=1 00:06:59.320 --rc geninfo_unexecuted_blocks=1 00:06:59.320 00:06:59.320 ' 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.320 --rc genhtml_branch_coverage=1 00:06:59.320 --rc genhtml_function_coverage=1 00:06:59.320 --rc genhtml_legend=1 00:06:59.320 --rc geninfo_all_blocks=1 00:06:59.320 --rc geninfo_unexecuted_blocks=1 00:06:59.320 00:06:59.320 ' 00:06:59.320 10:48:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.320 10:48:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 ************************************ 00:06:59.320 START TEST thread_poller_perf 00:06:59.320 ************************************ 00:06:59.320 10:48:48 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.320 [2024-11-15 10:48:48.107672] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:06:59.320 [2024-11-15 10:48:48.107743] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286597 ] 00:06:59.320 [2024-11-15 10:48:48.171425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.577 [2024-11-15 10:48:48.212685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.577 Running 1000 pollers for 1 seconds with 1 microseconds period. 
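Decoding the poller_perf invocation just traced against its banner line, the flags appear to map as: -b poller count, -l poller period in microseconds, -t run time in seconds. The two configurations this suite exercises, with that inferred mapping (flag semantics read off the banner, not from the tool's usage text; path shortened from the trace):

    poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period, 1 s run (this run)
    poller_perf -b 1000 -l 0 -t 1   # 1000 busy pollers, period 0 (second run below)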
00:07:00.513 [2024-11-15T09:48:49.397Z] ====================================== 00:07:00.513 [2024-11-15T09:48:49.397Z] busy:2306084466 (cyc) 00:07:00.513 [2024-11-15T09:48:49.397Z] total_run_count: 400000 00:07:00.513 [2024-11-15T09:48:49.397Z] tsc_hz: 2300000000 (cyc) 00:07:00.513 [2024-11-15T09:48:49.397Z] ====================================== 00:07:00.513 [2024-11-15T09:48:49.397Z] poller_cost: 5765 (cyc), 2506 (nsec) 00:07:00.513 00:07:00.513 real 0m1.171s 00:07:00.513 user 0m1.107s 00:07:00.513 sys 0m0.061s 00:07:00.513 10:48:49 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.513 10:48:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.513 ************************************ 00:07:00.513 END TEST thread_poller_perf 00:07:00.513 ************************************ 00:07:00.513 10:48:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.513 10:48:49 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:00.513 10:48:49 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.513 10:48:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.513 ************************************ 00:07:00.513 START TEST thread_poller_perf 00:07:00.513 ************************************ 00:07:00.513 10:48:49 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.513 [2024-11-15 10:48:49.352415] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:07:00.513 [2024-11-15 10:48:49.352476] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286855 ] 00:07:00.772 [2024-11-15 10:48:49.419084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.772 [2024-11-15 10:48:49.460625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.772 Running 1000 pollers for 1 seconds with 0 microseconds period. 
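The poller_cost line in the table above follows directly from the other counters: cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts through tsc_hz. Reproducing the first run's numbers in shell arithmetic:

    busy=2306084466; runs=400000; tsc_hz=2300000000
    echo $(( busy / runs ))                        # 5765 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2506 ns at the 2.3 GHz TSC

The 0-microsecond run below should land far cheaper per call, since busy pollers fire back-to-back instead of being rearmed on a timer, and indeed it reports 433 cycles (188 ns).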
00:07:01.707 [2024-11-15T09:48:50.591Z] ====================================== 00:07:01.707 [2024-11-15T09:48:50.591Z] busy:2301743812 (cyc) 00:07:01.707 [2024-11-15T09:48:50.591Z] total_run_count: 5315000 00:07:01.707 [2024-11-15T09:48:50.591Z] tsc_hz: 2300000000 (cyc) 00:07:01.707 [2024-11-15T09:48:50.591Z] ====================================== 00:07:01.707 [2024-11-15T09:48:50.591Z] poller_cost: 433 (cyc), 188 (nsec) 00:07:01.707 00:07:01.707 real 0m1.171s 00:07:01.707 user 0m1.103s 00:07:01.707 sys 0m0.064s 00:07:01.707 10:48:50 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.707 10:48:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.707 ************************************ 00:07:01.707 END TEST thread_poller_perf 00:07:01.707 ************************************ 00:07:01.707 10:48:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.707 00:07:01.707 real 0m2.647s 00:07:01.707 user 0m2.358s 00:07:01.707 sys 0m0.303s 00:07:01.707 10:48:50 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.707 10:48:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.707 ************************************ 00:07:01.707 END TEST thread 00:07:01.707 ************************************ 00:07:01.707 10:48:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.707 10:48:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.707 10:48:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.707 10:48:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.707 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.966 ************************************ 00:07:01.966 START TEST app_cmdline 00:07:01.966 ************************************ 00:07:01.966 10:48:50 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.966 * Looking for test storage... 
00:07:01.966 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:01.966 10:48:50 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.966 10:48:50 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.966 10:48:50 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.966 10:48:50 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.966 10:48:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.966 10:48:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.967 10:48:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.967 --rc genhtml_branch_coverage=1 00:07:01.967 --rc genhtml_function_coverage=1 00:07:01.967 --rc genhtml_legend=1 00:07:01.967 --rc geninfo_all_blocks=1 00:07:01.967 --rc geninfo_unexecuted_blocks=1 00:07:01.967 00:07:01.967 ' 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.967 --rc genhtml_branch_coverage=1 00:07:01.967 --rc genhtml_function_coverage=1 00:07:01.967 --rc genhtml_legend=1 00:07:01.967 --rc geninfo_all_blocks=1 00:07:01.967 --rc geninfo_unexecuted_blocks=1 
00:07:01.967 00:07:01.967 ' 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.967 --rc genhtml_branch_coverage=1 00:07:01.967 --rc genhtml_function_coverage=1 00:07:01.967 --rc genhtml_legend=1 00:07:01.967 --rc geninfo_all_blocks=1 00:07:01.967 --rc geninfo_unexecuted_blocks=1 00:07:01.967 00:07:01.967 ' 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.967 --rc genhtml_branch_coverage=1 00:07:01.967 --rc genhtml_function_coverage=1 00:07:01.967 --rc genhtml_legend=1 00:07:01.967 --rc geninfo_all_blocks=1 00:07:01.967 --rc geninfo_unexecuted_blocks=1 00:07:01.967 00:07:01.967 ' 00:07:01.967 10:48:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.967 10:48:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1287151 00:07:01.967 10:48:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1287151 00:07:01.967 10:48:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 1287151 ']' 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.967 10:48:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.967 [2024-11-15 10:48:50.836528] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:07:01.967 [2024-11-15 10:48:50.836576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287151 ] 00:07:02.228 [2024-11-15 10:48:50.898770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.228 [2024-11-15 10:48:50.941924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.487 10:48:51 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.487 10:48:51 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:02.487 10:48:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.487 { 00:07:02.487 "version": "SPDK v25.01-pre git sha1 30279d1cf", 00:07:02.487 "fields": { 00:07:02.487 "major": 25, 00:07:02.487 "minor": 1, 00:07:02.487 "patch": 0, 00:07:02.487 "suffix": "-pre", 00:07:02.487 "commit": "30279d1cf" 00:07:02.487 } 00:07:02.487 } 00:07:02.488 10:48:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.488 10:48:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.488 10:48:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.488 10:48:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.488 10:48:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.488 10:48:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.488 10:48:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.488 10:48:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.488 10:48:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.488 10:48:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.747 10:48:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.747 10:48:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.747 10:48:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.747 request: 00:07:02.747 { 00:07:02.747 "method": "env_dpdk_get_mem_stats", 00:07:02.747 "req_id": 1 00:07:02.747 } 00:07:02.747 Got JSON-RPC error response 00:07:02.747 response: 00:07:02.747 { 00:07:02.747 "code": -32601, 00:07:02.747 "message": "Method not found" 00:07:02.747 } 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.747 10:48:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1287151 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 1287151 ']' 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 1287151 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.747 10:48:51 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1287151 00:07:03.006 10:48:51 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:03.006 10:48:51 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:03.006 10:48:51 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1287151' 00:07:03.006 killing process with pid 1287151 00:07:03.006 10:48:51 app_cmdline -- common/autotest_common.sh@971 -- # kill 1287151 00:07:03.006 10:48:51 app_cmdline -- common/autotest_common.sh@976 -- # wait 1287151 00:07:03.266 00:07:03.266 real 0m1.338s 00:07:03.266 user 0m1.569s 00:07:03.266 sys 0m0.443s 00:07:03.266 10:48:51 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.266 10:48:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.266 ************************************ 00:07:03.266 END TEST app_cmdline 00:07:03.266 ************************************ 00:07:03.266 10:48:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:03.266 10:48:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.266 10:48:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.266 10:48:51 -- common/autotest_common.sh@10 -- # set +x 00:07:03.266 ************************************ 00:07:03.266 START TEST version 00:07:03.266 ************************************ 00:07:03.266 10:48:52 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:03.266 * Looking for test storage... 
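The app_cmdline test that just closed hinges on the --rpcs-allowed allowlist visible in its launch trace: only spdk_get_version and rpc_get_methods are callable, and anything else must come back as JSON-RPC -32601. Condensed to its three RPC interactions (paths shortened from the trace; direct rpc.py calls stand in for the rpc_cmd wrapper):

    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version        # ok: {"version": "SPDK v25.01-pre git sha1 30279d1cf", ...}
    scripts/rpc.py rpc_get_methods         # ok: exactly the two allowlisted methods
    scripts/rpc.py env_dpdk_get_mem_stats  # rejected: -32601 "Method not found"

Note that rpc_get_methods itself reports only the allowlisted methods, which is what the sort-and-compare against expected_methods (the "(( 2 == 2 ))" check in the trace) verifies.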
00:07:03.266 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:03.266 10:48:52 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:03.266 10:48:52 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:03.266 10:48:52 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:03.525 10:48:52 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:03.526 10:48:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.526 10:48:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.526 10:48:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.526 10:48:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.526 10:48:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.526 10:48:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.526 10:48:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.526 10:48:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.526 10:48:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.526 10:48:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.526 10:48:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.526 10:48:52 version -- scripts/common.sh@344 -- # case "$op" in 00:07:03.526 10:48:52 version -- scripts/common.sh@345 -- # : 1 00:07:03.526 10:48:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.526 10:48:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.526 10:48:52 version -- scripts/common.sh@365 -- # decimal 1 00:07:03.526 10:48:52 version -- scripts/common.sh@353 -- # local d=1 00:07:03.526 10:48:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.526 10:48:52 version -- scripts/common.sh@355 -- # echo 1 00:07:03.526 10:48:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.526 10:48:52 version -- scripts/common.sh@366 -- # decimal 2 00:07:03.526 10:48:52 version -- scripts/common.sh@353 -- # local d=2 00:07:03.526 10:48:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.526 10:48:52 version -- scripts/common.sh@355 -- # echo 2 00:07:03.526 10:48:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.526 10:48:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.526 10:48:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.526 10:48:52 version -- scripts/common.sh@368 -- # return 0 00:07:03.526 10:48:52 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.526 10:48:52 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.526 --rc genhtml_branch_coverage=1 00:07:03.526 --rc genhtml_function_coverage=1 00:07:03.526 --rc genhtml_legend=1 00:07:03.526 --rc geninfo_all_blocks=1 00:07:03.526 --rc geninfo_unexecuted_blocks=1 00:07:03.526 00:07:03.526 ' 00:07:03.526 10:48:52 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.526 --rc genhtml_branch_coverage=1 00:07:03.526 --rc genhtml_function_coverage=1 00:07:03.526 --rc genhtml_legend=1 00:07:03.526 --rc geninfo_all_blocks=1 00:07:03.526 --rc geninfo_unexecuted_blocks=1 00:07:03.526 00:07:03.526 ' 00:07:03.526 10:48:52 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:03.526 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.526 --rc genhtml_branch_coverage=1 00:07:03.526 --rc genhtml_function_coverage=1 00:07:03.526 --rc genhtml_legend=1 00:07:03.526 --rc geninfo_all_blocks=1 00:07:03.526 --rc geninfo_unexecuted_blocks=1 00:07:03.526 00:07:03.526 ' 00:07:03.526 10:48:52 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.526 --rc genhtml_branch_coverage=1 00:07:03.526 --rc genhtml_function_coverage=1 00:07:03.526 --rc genhtml_legend=1 00:07:03.526 --rc geninfo_all_blocks=1 00:07:03.526 --rc geninfo_unexecuted_blocks=1 00:07:03.526 00:07:03.526 ' 00:07:03.526 10:48:52 version -- app/version.sh@17 -- # get_header_version major 00:07:03.526 10:48:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # cut -f2 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.526 10:48:52 version -- app/version.sh@17 -- # major=25 00:07:03.526 10:48:52 version -- app/version.sh@18 -- # get_header_version minor 00:07:03.526 10:48:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # cut -f2 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.526 10:48:52 version -- app/version.sh@18 -- # minor=1 00:07:03.526 10:48:52 version -- app/version.sh@19 -- # get_header_version patch 00:07:03.526 10:48:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # cut -f2 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.526 10:48:52 version -- app/version.sh@19 -- # patch=0 00:07:03.526 10:48:52 version -- app/version.sh@20 -- # get_header_version suffix 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # cut -f2 00:07:03.526 10:48:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.526 10:48:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.526 10:48:52 version -- app/version.sh@20 -- # suffix=-pre 00:07:03.526 10:48:52 version -- app/version.sh@22 -- # version=25.1 00:07:03.526 10:48:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.526 10:48:52 version -- app/version.sh@28 -- # version=25.1rc0 00:07:03.526 10:48:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:03.526 10:48:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.526 10:48:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:03.526 10:48:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:03.526 00:07:03.526 real 0m0.246s 00:07:03.526 user 0m0.154s 00:07:03.526 sys 0m0.132s 00:07:03.526 10:48:52 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.526 10:48:52 version -- 
00:07:03.526 10:48:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:07:03.526 10:48:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:07:03.526 10:48:52 -- spdk/autotest.sh@194 -- # uname -s
00:07:03.526 10:48:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:07:03.526 10:48:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:03.526 10:48:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:03.526 10:48:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:07:03.526 10:48:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:07:03.526 10:48:52 -- spdk/autotest.sh@256 -- # timing_exit lib
00:07:03.526 10:48:52 -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:03.526 10:48:52 -- common/autotest_common.sh@10 -- # set +x
00:07:03.526 10:48:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:07:03.526 10:48:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:07:03.526 10:48:52 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']'
00:07:03.526 10:48:52 -- spdk/autotest.sh@273 -- # export NET_TYPE
00:07:03.526 10:48:52 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']'
00:07:03.526 10:48:52 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:07:03.526 10:48:52 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:03.526 10:48:52 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:03.526 10:48:52 -- common/autotest_common.sh@10 -- # set +x
00:07:03.526 ************************************
00:07:03.526 START TEST nvmf_rdma
00:07:03.526 ************************************
00:07:03.526 10:48:52 nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:07:03.786 * Looking for test storage...
00:07:03.786 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-:
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-:
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<'
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@345 -- # : 1
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@353 -- # local d=1
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@355 -- # echo 1
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@353 -- # local d=2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@355 -- # echo 2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:03.786 10:48:52 nvmf_rdma -- scripts/common.sh@368 -- # return 0
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.786 --rc genhtml_branch_coverage=1
00:07:03.786 --rc genhtml_function_coverage=1
00:07:03.786 --rc genhtml_legend=1
00:07:03.786 --rc geninfo_all_blocks=1
00:07:03.786 --rc geninfo_unexecuted_blocks=1
00:07:03.786 
00:07:03.786 '
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.786 --rc genhtml_branch_coverage=1
00:07:03.786 --rc genhtml_function_coverage=1
00:07:03.786 --rc genhtml_legend=1
00:07:03.786 --rc geninfo_all_blocks=1
00:07:03.786 --rc geninfo_unexecuted_blocks=1
00:07:03.786 
00:07:03.786 '
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.786 --rc genhtml_branch_coverage=1
00:07:03.786 --rc genhtml_function_coverage=1
00:07:03.786 --rc genhtml_legend=1
00:07:03.786 --rc geninfo_all_blocks=1
00:07:03.786 --rc geninfo_unexecuted_blocks=1
00:07:03.786 
00:07:03.786 '
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.786 --rc genhtml_branch_coverage=1
00:07:03.786 --rc genhtml_function_coverage=1
00:07:03.786 --rc genhtml_legend=1
00:07:03.786 --rc geninfo_all_blocks=1
00:07:03.786 --rc geninfo_unexecuted_blocks=1
00:07:03.786 
00:07:03.786 '
00:07:03.786 10:48:52 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s
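
The lt 1.15 2 / cmp_versions trace that opens every test scope is autotest_common.sh deciding whether the installed lcov (1.15 here) predates 2.x, which controls the --rc flags exported just above. A condensed, runnable sketch of that comparison (names taken from the trace; the loop body is a reconstruction for illustration, not a copy of scripts/common.sh):

    # Split versions on '.', '-' and ':' and compare numerically, left to right.
    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0   # non-numeric parts compare as 0 in this sketch
            [[ $b =~ ^[0-9]+$ ]] || b=0
            # `return` with no argument propagates the status of the preceding [[ ]] test.
            ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "old lcov"   # 1 < 2 decides on the first component

With lcov 1.15 installed, lt 1.15 2 succeeds, so the harness exports the same lcov_branch_coverage/lcov_function_coverage flags seen in the LCOV_OPTS dumps throughout this log.
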
00:07:03.786 10:48:52 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:07:03.786 10:48:52 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:03.786 10:48:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:03.786 ************************************
00:07:03.786 START TEST nvmf_target_core
00:07:03.786 ************************************
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
00:07:03.786 * Looking for test storage...
00:07:03.786 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:03.786 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.046 --rc genhtml_branch_coverage=1 00:07:04.046 --rc genhtml_function_coverage=1 00:07:04.046 --rc genhtml_legend=1 00:07:04.046 --rc geninfo_all_blocks=1 00:07:04.046 --rc geninfo_unexecuted_blocks=1 00:07:04.046 00:07:04.046 ' 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.046 --rc genhtml_branch_coverage=1 00:07:04.046 --rc genhtml_function_coverage=1 00:07:04.046 --rc genhtml_legend=1 00:07:04.046 --rc geninfo_all_blocks=1 00:07:04.046 --rc geninfo_unexecuted_blocks=1 00:07:04.046 00:07:04.046 ' 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.046 --rc genhtml_branch_coverage=1 00:07:04.046 --rc genhtml_function_coverage=1 00:07:04.046 --rc genhtml_legend=1 00:07:04.046 --rc geninfo_all_blocks=1 00:07:04.046 --rc geninfo_unexecuted_blocks=1 00:07:04.046 00:07:04.046 ' 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.046 --rc genhtml_branch_coverage=1 00:07:04.046 --rc genhtml_function_coverage=1 00:07:04.046 --rc genhtml_legend=1 00:07:04.046 --rc geninfo_all_blocks=1 00:07:04.046 --rc geninfo_unexecuted_blocks=1 00:07:04.046 00:07:04.046 ' 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.046 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.047 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.047 
************************************ 00:07:04.047 START TEST nvmf_abort 00:07:04.047 ************************************ 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:04.047 * Looking for test storage... 00:07:04.047 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.047 --rc genhtml_branch_coverage=1 00:07:04.047 --rc genhtml_function_coverage=1 00:07:04.047 --rc genhtml_legend=1 00:07:04.047 --rc geninfo_all_blocks=1 00:07:04.047 --rc geninfo_unexecuted_blocks=1 00:07:04.047 00:07:04.047 ' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.047 --rc genhtml_branch_coverage=1 00:07:04.047 --rc genhtml_function_coverage=1 00:07:04.047 --rc genhtml_legend=1 00:07:04.047 --rc geninfo_all_blocks=1 00:07:04.047 --rc geninfo_unexecuted_blocks=1 00:07:04.047 00:07:04.047 ' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.047 --rc genhtml_branch_coverage=1 00:07:04.047 --rc genhtml_function_coverage=1 00:07:04.047 --rc genhtml_legend=1 00:07:04.047 --rc geninfo_all_blocks=1 00:07:04.047 --rc geninfo_unexecuted_blocks=1 00:07:04.047 00:07:04.047 ' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.047 --rc genhtml_branch_coverage=1 00:07:04.047 --rc genhtml_function_coverage=1 00:07:04.047 --rc genhtml_legend=1 00:07:04.047 --rc geninfo_all_blocks=1 00:07:04.047 --rc geninfo_unexecuted_blocks=1 00:07:04.047 00:07:04.047 ' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:04.047 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.306 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.306 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:04.307 10:48:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:07:09.574 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:07:09.574 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:07:09.574 Found net devices under 0000:af:00.0: mlx_0_0 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:09.574 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:07:09.575 Found net devices under 0000:af:00.1: mlx_0_1 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR ))
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:07:09.575 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:09.575 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff
00:07:09.575 altname enp175s0f0np0
00:07:09.575 altname ens801f0np0
00:07:09.575 inet 192.168.100.8/24 scope global mlx_0_0
00:07:09.575 valid_lft forever preferred_lft forever
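
The per-interface address harvesting above is plain ip(8) output parsing, repeated for each RDMA NIC. Standalone, the helper traced as nvmf/common.sh's get_ip_address amounts to roughly this (a sketch; only the ip/awk/cut pipeline is taken from the trace):

    # First IPv4 address of an interface. `ip -o -4 addr show DEV` prints one
    # line per address, e.g.
    #   8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0 ...
    # so field 4 is the CIDR address, and cut strips the /24 prefix length.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig

The same loop then repeats for the second port, mlx_0_1, below.
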
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:07:09.575 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:07:09.575 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff
00:07:09.575 altname enp175s0f1np1
00:07:09.575 altname ens801f1np1
00:07:09.575 inet 192.168.100.9/24 scope global mlx_0_1
00:07:09.575 valid_lft forever preferred_lft forever
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:09.575 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:09.576 192.168.100.9' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:09.576 192.168.100.9' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:09.576 192.168.100.9' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.576 10:48:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@509 -- # nvmfpid=1290661 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1290661 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1290661 ']' 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.576 [2024-11-15 10:48:58.054980] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:07:09.576 [2024-11-15 10:48:58.055034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.576 [2024-11-15 10:48:58.120372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.576 [2024-11-15 10:48:58.162937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.576 [2024-11-15 10:48:58.162979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.576 [2024-11-15 10:48:58.162986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.576 [2024-11-15 10:48:58.162992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.576 [2024-11-15 10:48:58.162996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
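
The waitforlisten call above blocks until the just-forked nvmf_tgt (pid 1290661) answers on its RPC socket, bailing out if the process dies during startup. A simplified sketch of that polling pattern (the real helper in autotest_common.sh is more elaborate; the rpc_get_methods probe, retry count, and 0.5s interval are illustrative choices here):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 200; i++)); do
        # Abort early if the target crashed instead of coming up.
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        # Any successful RPC means the app finished init and is listening.
        if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done

Once the loop breaks, the DPDK/EAL and reactor notices below confirm the target is up on cores 1-3 (core mask 0xE).
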
00:07:09.576 [2024-11-15 10:48:58.164524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.576 [2024-11-15 10:48:58.164593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.576 [2024-11-15 10:48:58.164595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.576 [2024-11-15 10:48:58.334719] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24099e0/0x240ded0) succeed. 00:07:09.576 [2024-11-15 10:48:58.350862] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x240afd0/0x244f570) succeed. 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.576 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.835 Malloc0 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.835 Delay0 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.835 [2024-11-15 10:48:58.517373] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.835 10:48:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:09.835 [2024-11-15 10:48:58.623697] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:12.365 Initializing NVMe Controllers 00:07:12.365 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:12.365 controller IO queue size 128 less than required 00:07:12.365 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:12.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:12.365 Initialization complete. Launching workers. 
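
For reference, the rpc_cmd sequence traced above (rpc_cmd is effectively the harness's wrapper around scripts/rpc.py) collapses to the following calls against the default /var/tmp/spdk.sock socket; all commands and arguments are taken verbatim from the trace, only the consolidated form is editorial. The delay bdev is the interesting design choice: its latencies are in microseconds, so roughly one second per I/O, which keeps requests queued long enough for the abort tool to cancel them.

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # RDMA transport with the queue sizing flags captured in the trace.
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    # 64 MiB ramdisk with 4096-byte blocks, wrapped in a ~1s delay bdev.
    $rpc_py bdev_malloc_create 64 4096 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Subsystem cnode0 exports Delay0 on the first RDMA IP, plus the discovery service.
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # Initiator side is the bundled example, as launched by abort.sh above:
    #   build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    #       -c 0x1 -t 1 -l warning -q 128

The statistics that follow bear this out: of 41851 aborts submitted, 41791 succeeded and only 60 came back unsuccessful, with 62 that could not be submitted at all.
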
00:07:12.365 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41790 00:07:12.365 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41851, failed to submit 62 00:07:12.365 success 41791, unsuccessful 60, failed 0 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:12.365 rmmod nvme_rdma 00:07:12.365 rmmod nvme_fabrics 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1290661 ']' 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1290661 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1290661 ']' 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1290661 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:07:12.365 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.366 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1290661 00:07:12.366 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:12.366 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:12.366 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1290661' 00:07:12.366 killing process with pid 1290661 00:07:12.366 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1290661 00:07:12.366 10:49:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1290661 00:07:12.366 10:49:01 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:12.366 00:07:12.366 real 0m8.318s 00:07:12.366 user 0m12.245s 00:07:12.366 sys 0m4.231s 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.366 ************************************ 00:07:12.366 END TEST nvmf_abort 00:07:12.366 ************************************ 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.366 ************************************ 00:07:12.366 START TEST nvmf_ns_hotplug_stress 00:07:12.366 ************************************ 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:12.366 * Looking for test storage... 00:07:12.366 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:12.366 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:12.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.625 --rc genhtml_branch_coverage=1 00:07:12.625 --rc genhtml_function_coverage=1 00:07:12.625 --rc genhtml_legend=1 00:07:12.625 --rc geninfo_all_blocks=1 00:07:12.625 --rc geninfo_unexecuted_blocks=1 00:07:12.625 00:07:12.625 ' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:12.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.625 --rc genhtml_branch_coverage=1 00:07:12.625 --rc genhtml_function_coverage=1 00:07:12.625 --rc genhtml_legend=1 00:07:12.625 --rc geninfo_all_blocks=1 00:07:12.625 --rc geninfo_unexecuted_blocks=1 00:07:12.625 00:07:12.625 ' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:12.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.625 --rc genhtml_branch_coverage=1 00:07:12.625 --rc genhtml_function_coverage=1 00:07:12.625 --rc genhtml_legend=1 00:07:12.625 --rc geninfo_all_blocks=1 00:07:12.625 --rc geninfo_unexecuted_blocks=1 00:07:12.625 00:07:12.625 ' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:12.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:12.625 --rc genhtml_branch_coverage=1 00:07:12.625 --rc genhtml_function_coverage=1 00:07:12.625 --rc genhtml_legend=1 00:07:12.625 --rc geninfo_all_blocks=1 00:07:12.625 --rc geninfo_unexecuted_blocks=1 00:07:12.625 00:07:12.625 ' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.625 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.626 10:49:01 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.626 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:12.626 10:49:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:17.895 10:49:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:07:17.895 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:17.895 10:49:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:07:17.895 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:07:17.895 Found net devices under 0000:af:00.0: mlx_0_0 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:07:17.895 Found net devices under 0000:af:00.1: mlx_0_1 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.895 10:49:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:17.895 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:17.896 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:17.896 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:07:17.896 altname enp175s0f0np0 00:07:17.896 altname ens801f0np0 00:07:17.896 inet 192.168.100.8/24 scope global mlx_0_0 00:07:17.896 valid_lft forever preferred_lft forever 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:17.896 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:17.896 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:07:17.896 altname enp175s0f1np1 00:07:17.896 altname ens801f1np1 00:07:17.896 inet 192.168.100.9/24 scope global mlx_0_1 00:07:17.896 valid_lft forever preferred_lft forever 00:07:17.896 10:49:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:17.896 192.168.100.9' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:17.896 192.168.100.9' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:17.896 192.168.100.9' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.896 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1294254 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1294254 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1294254 ']' 00:07:17.897 10:49:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.897 [2024-11-15 10:49:06.303632] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:07:17.897 [2024-11-15 10:49:06.303691] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.897 [2024-11-15 10:49:06.367921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.897 [2024-11-15 10:49:06.407373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.897 [2024-11-15 10:49:06.407412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.897 [2024-11-15 10:49:06.407419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.897 [2024-11-15 10:49:06.407426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.897 [2024-11-15 10:49:06.407431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
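The app_setup_trace notices above describe how to inspect the 0xFFFF tracepoint mask the target was started with (-e 0xFFFF). A minimal sketch of both capture paths, assuming the spdk_trace tool lives in the same build tree as nvmf_tgt (path assumed, and the offline copy step is illustrative rather than taken from this run):

# Live snapshot from the running target (app name nvmf, shm id 0), as the notice suggests
$SPDK_DIR/build/bin/spdk_trace -s nvmf -i 0
# Offline: keep the shared-memory trace file for analysis after the target exits
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
$SPDK_DIR/build/bin/spdk_trace -f /tmp/nvmf_trace.0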
00:07:17.897 [2024-11-15 10:49:06.408894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.897 [2024-11-15 10:49:06.408961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.897 [2024-11-15 10:49:06.408963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:17.897 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:17.897 [2024-11-15 10:49:06.743089] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd2f9e0/0xd33ed0) succeed. 00:07:17.897 [2024-11-15 10:49:06.752293] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd30fd0/0xd75570) succeed. 00:07:18.159 10:49:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:18.416 10:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:18.416 [2024-11-15 10:49:07.261988] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:18.416 10:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:18.673 10:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:18.932 Malloc0 00:07:18.932 10:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:19.190 Delay0 00:07:19.190 10:49:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.448 10:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:07:19.448 NULL1 00:07:19.448 10:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:19.705 10:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1294742 00:07:19.705 10:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:19.705 10:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:19.705 10:49:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.080 Read completed with error (sct=0, sc=11) 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 10:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.080 10:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:21.080 10:49:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:21.338 true 00:07:21.338 10:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:21.338 10:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.273 10:49:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.273 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:07:22.273 10:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:22.273 10:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:22.531 true 00:07:22.531 10:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:22.531 10:49:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 10:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.467 10:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:23.467 10:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:23.726 true 00:07:23.726 10:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:23.726 10:49:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 10:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.723 10:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:24.723 10:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1004 00:07:25.033 true 00:07:25.033 10:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:25.033 10:49:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.968 10:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.968 10:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:25.968 10:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:26.226 true 00:07:26.226 10:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:26.226 10:49:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 10:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.161 10:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:27.161 10:49:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:27.419 true 00:07:27.419 10:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:27.419 10:49:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.352 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:07:28.352 10:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.352 10:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:28.352 10:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:28.610 true 00:07:28.611 10:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:28.611 10:49:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.543 10:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.544 10:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:29.544 10:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:29.800 true 00:07:29.800 10:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:29.800 10:49:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.734 10:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.992 10:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:30.992 10:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:30.992 true 00:07:30.992 10:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:30.992 10:49:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.927 10:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.185 10:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:32.185 10:49:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:32.185 true 00:07:32.185 10:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:32.185 10:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.121 10:49:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.379 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.379 10:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:33.379 10:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:33.637 true 00:07:33.637 10:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:33.637 10:49:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 10:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.572 10:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:34.572 10:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:34.831 true 00:07:34.831 10:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:34.831 10:49:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 10:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.767 10:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:35.767 10:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1013 00:07:36.025 true 00:07:36.025 10:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:36.025 10:49:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 10:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.976 10:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:36.976 10:49:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:37.233 true 00:07:37.233 10:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:37.233 10:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.167 10:49:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.167 10:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:38.167 10:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:38.426 true 00:07:38.426 10:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:38.426 10:49:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.361 10:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.361 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.361 10:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:39.361 10:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:39.619 true 00:07:39.620 10:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:39.620 10:49:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.555 10:49:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.813 10:49:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:40.813 10:49:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:40.813 true 00:07:40.814 10:49:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:40.814 10:49:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.749 10:49:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.007 10:49:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:42.007 10:49:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:42.007 true 00:07:42.007 10:49:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:42.007 10:49:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.942 10:49:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.201 10:49:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:43.201 10:49:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:43.201 true 00:07:43.459 10:49:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:43.459 10:49:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.025 10:49:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.283 10:49:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:44.283 10:49:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:44.541 true 00:07:44.541 10:49:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:44.541 10:49:33 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 10:49:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.475 10:49:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:45.475 10:49:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:45.734 true 00:07:45.734 10:49:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:45.734 10:49:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 10:49:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.668 10:49:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:46.668 10:49:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:46.926 true 00:07:46.926 10:49:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:46.926 10:49:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 10:49:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.860 10:49:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:47.860 10:49:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:48.117 true 00:07:48.117 10:49:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:48.117 10:49:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.051 10:49:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.308 10:49:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:49.308 10:49:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:49.308 true 00:07:49.308 10:49:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742 00:07:49.308 10:49:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.243 10:49:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.502 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:50.502 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:50.502 true 00:07:50.502 10:49:39 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742
00:07:50.502 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.760 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.068 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:51.068 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:51.068 true
00:07:51.326 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742
00:07:51.326 10:49:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.326 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.584 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:51.584 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:51.843 true
00:07:51.843 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742
00:07:51.843 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.100 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.100 Initializing NVMe Controllers
00:07:52.100 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:07:52.100 Controller IO queue size 128, less than required.
00:07:52.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:52.101 Controller IO queue size 128, less than required.
00:07:52.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:52.101 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:52.101 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:52.101 Initialization complete. Launching workers.
00:07:52.101 ========================================================
00:07:52.101                                                                 Latency(us)
00:07:52.101 Device Information                                             :       IOPS      MiB/s    Average        min        max
00:07:52.101 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    6123.33       2.99   18690.79     909.34 1142326.67
00:07:52.101 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   32527.27      15.88    3934.89    1528.05  300964.42
00:07:52.101 ========================================================
00:07:52.101 Total                                                          :   38650.60      18.87    6272.64     909.34 1142326.67
00:07:52.101
00:07:52.359 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:52.359 10:49:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:52.359 true
00:07:52.359 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1294742
00:07:52.359 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1294742) - No such process
00:07:52.359 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1294742
00:07:52.359 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.617 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:52.876 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:52.876 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:52.876 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:52.876 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:52.876 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:52.876 null0
00:07:53.135 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:53.135 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:53.135 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:53.135 null1
00:07:53.135 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:53.135 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:53.135 10:49:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:53.394 null2
00:07:53.394 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:53.394 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:53.394 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:53.652 null3
00:07:53.652 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:53.652 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:53.652 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:53.652 null4
00:07:53.910 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:53.910 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:53.910 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:53.910 null5
00:07:53.910 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:53.910 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:53.910 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:54.168 null6
00:07:54.168 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:54.168 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:54.168 10:49:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:54.427 null7
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
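The xtrace above (script lines 44-50) is the first phase of the test: while the I/O generator (pid 1294742) is alive, namespace 1 on nqn.2016-06.io.spdk:cnode1 is repeatedly hot-removed and re-added, and the NULL1 bdev is grown one unit per iteration (1002, 1003, ... 1028). A minimal sketch of that loop, reconstructed from the trace; the while-loop header and the perf_pid variable name are assumptions, only the rpc.py calls and null_size come verbatim from the log:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Keep hot-plugging until the I/O generator exits (line 44 checks liveness;
    # perf_pid is a hypothetical name for what the log shows as 1294742).
    while kill -0 "$perf_pid" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                  # line 49: 1002, 1003, ...
        $rpc bdev_null_resize NULL1 "$null_size"                      # line 50: resize NULL1 under I/O
    done

As a sanity check on the summary table printed when the generator exited: 6123.33 + 32527.27 = 38650.60 IOPS, and the Total average is the IOPS-weighted mean, (6123.33 * 18690.79 + 32527.27 * 3934.89) / 38650.60 ≈ 6272.6 us, matching the reported 6272.64; min and max are taken across both namespaces.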
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1300667 1300668 1300670 1300672 1300674 1300676 1300678 1300680 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.427 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
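The interleaved xtrace around this point is the second phase: eight add_remove workers run in parallel, one per namespace/null-bdev pair, each adding and removing its namespace ten times. An approximate reconstruction from the trace (script lines 14-18 and 58-66); backgrounding with & is inferred from pids+=($!), and the exact quoting is an assumption:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                          # lines 14-18, as traced (e.g. nsid=1 bdev=null0)
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }

    nthreads=8; pids=()                     # line 58
    for ((i = 0; i < nthreads; i++)); do    # lines 59-60: create null0..null7
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do    # lines 62-64: one background worker per pair
        add_remove "$((i + 1))" "null$i" &  # traced as 'add_remove 1 null0', 'add_remove 2 null1', ...
        pids+=($!)
    done
    wait "${pids[@]}"                       # line 66: the log waits on pids 1300667 1300668 ... 1300680

Because the eight workers share one console, their trace lines interleave, which is why consecutive entries below jump between different nsid/bdev pairs.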
00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.686 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.945 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.203 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.204 10:49:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.462 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.719 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.719 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.719 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.720 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.978 10:49:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
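Note: the xtrace above is the namespace hotplug stress loop from target/ns_hotplug_stress.sh (script lines 16-18, the @16/@17/@18 tags on each record). Read from the trace alone: up to ten passes, each re-adding namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 (backed by the null0-null7 bdevs, nsid n mapping to null(n-1)) and then removing all eight again, with the per-pass ordering varying. A minimal reconstruction under those assumptions (the shuffle and variable names are guesses, not the verbatim script):

    # Sketch of the loop driving the trace above; rpc path as in the log.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; i++ )); do                                    # @16
        for n in $(seq 1 8 | shuf); do                                  # varying order per pass (shuf is an assumption)
            $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"  # @17
        done
        for n in $(seq 1 8 | shuf); do
            $rpc nvmf_subsystem_remove_ns "$nqn" "$n"                   # @18
        done
    done

The occasional back-to-back duplicate (( ++i )) / (( i < 10 )) records within a pass look like ordinary xtrace output interleaving rather than extra iterations.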
00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.237 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.495 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.754 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.013 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.013 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.013 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.014 10:49:45 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.014 10:49:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.273 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.531 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.790 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.048 10:49:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.307 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.565 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:58.566 rmmod nvme_rdma 00:07:58.566 rmmod nvme_fabrics 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1294254 ']' 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1294254 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1294254 ']' 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1294254 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1294254 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1294254' 00:07:58.566 killing process with pid 1294254 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1294254 00:07:58.566 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1294254 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:58.824 00:07:58.824 real 0m46.530s 00:07:58.824 user 3m22.452s 00:07:58.824 sys 0m10.989s 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:58.824 ************************************ 00:07:58.824 END TEST nvmf_ns_hotplug_stress 00:07:58.824 ************************************ 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.824 10:49:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.084 ************************************ 00:07:59.084 START TEST nvmf_delete_subsystem 00:07:59.084 ************************************ 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:59.084 * Looking for test storage... 00:07:59.084 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:59.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.084 --rc genhtml_branch_coverage=1 00:07:59.084 --rc genhtml_function_coverage=1 00:07:59.084 --rc genhtml_legend=1 00:07:59.084 --rc geninfo_all_blocks=1 00:07:59.084 --rc geninfo_unexecuted_blocks=1 00:07:59.084 00:07:59.084 ' 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:59.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.084 --rc genhtml_branch_coverage=1 00:07:59.084 --rc genhtml_function_coverage=1 00:07:59.084 --rc genhtml_legend=1 00:07:59.084 --rc geninfo_all_blocks=1 00:07:59.084 --rc geninfo_unexecuted_blocks=1 00:07:59.084 00:07:59.084 ' 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:59.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.084 --rc genhtml_branch_coverage=1 00:07:59.084 --rc genhtml_function_coverage=1 00:07:59.084 --rc genhtml_legend=1 00:07:59.084 --rc geninfo_all_blocks=1 00:07:59.084 --rc geninfo_unexecuted_blocks=1 00:07:59.084 00:07:59.084 ' 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:59.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.084 --rc genhtml_branch_coverage=1 00:07:59.084 --rc genhtml_function_coverage=1 00:07:59.084 --rc genhtml_legend=1 00:07:59.084 --rc geninfo_all_blocks=1 00:07:59.084 --rc geninfo_unexecuted_blocks=1 00:07:59.084 00:07:59.084 ' 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
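Note: the scripts/common.sh records above (@333-@368) are the lcov version gate that picks the --rc option spelling: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field as integers, and since lcov 1.15 sorts before 2 the legacy lcov_branch_coverage options are exported. A rough standalone equivalent of what the trace shows (a sketch, not the SPDK source):

    # lt A B -> true when version A sorts before version B, per the cmp_versions trace.
    lt() {
        local IFS=.-:                      # @336-@337: split version fields on . - :
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do    # @364: walk the longer of the two
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                           # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov older than 2: use legacy --rc lcov_branch_coverage options'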
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.084 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.085 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.085 10:49:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:05.651 10:49:53 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:08:05.651 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:05.651 
10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:08:05.651 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:08:05.651 Found net devices under 0000:af:00.0: mlx_0_0 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:08:05.651 Found net devices under 0000:af:00.1: mlx_0_1 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:05.651 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:05.652 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:05.652 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:08:05.652 altname enp175s0f0np0 00:08:05.652 altname ens801f0np0 00:08:05.652 inet 192.168.100.8/24 scope global mlx_0_0 00:08:05.652 valid_lft forever preferred_lft forever 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:05.652 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:05.652 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:08:05.652 altname enp175s0f1np1 00:08:05.652 altname ens801f1np1 00:08:05.652 inet 192.168.100.9/24 scope global mlx_0_1 00:08:05.652 valid_lft forever preferred_lft forever 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:05.652 10:49:53 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:05.652 192.168.100.9' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:05.652 192.168.100.9' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:05.652 192.168.100.9' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1304718 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1304718 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1304718 ']' 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:05.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:05.652 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.652 [2024-11-15 10:49:53.666525] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:08:05.652 [2024-11-15 10:49:53.666575] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.653 [2024-11-15 10:49:53.732152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:05.653 [2024-11-15 10:49:53.772293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.653 [2024-11-15 10:49:53.772331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.653 [2024-11-15 10:49:53.772339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.653 [2024-11-15 10:49:53.772346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.653 [2024-11-15 10:49:53.772351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.653 [2024-11-15 10:49:53.773611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.653 [2024-11-15 10:49:53.773615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.653 10:49:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 [2024-11-15 10:49:53.925834] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xea7b50/0xeac040) succeed. 00:08:05.653 [2024-11-15 10:49:53.934937] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xea90a0/0xeed6e0) succeed. 
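With the RDMA transport created and both mlx5 IB devices registered, the test now configures the target over its RPC socket. The rpc_cmd calls traced here wrap SPDK's scripts/rpc.py; as a rough standalone sketch (assuming a running nvmf_tgt on the default RPC socket, with every value copied from this run), the same setup would be:

# Sketch only; rpc_cmd in the test resolves to scripts/rpc.py in this tree.
rpc=./spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency per IO
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second delay bdev is what later lets nvmf_delete_subsystem race against IO that is still outstanding.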
00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 [2024-11-15 10:49:54.020678] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 NULL1 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 Delay0 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1304888 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:05.653 10:49:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 
-o 512 -P 4 00:08:05.653 [2024-11-15 10:49:54.135355] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:07.553 10:49:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.553 10:49:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.553 10:49:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.486 NVMe io qpair process completion error 00:08:08.486 NVMe io qpair process completion error 00:08:08.486 NVMe io qpair process completion error 00:08:08.487 NVMe io qpair process completion error 00:08:08.487 NVMe io qpair process completion error 00:08:08.487 NVMe io qpair process completion error 00:08:08.487 10:49:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.487 10:49:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:08.487 10:49:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1304888 00:08:08.487 10:49:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:09.053 10:49:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:09.053 10:49:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1304888 00:08:09.053 10:49:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:09.311 Write completed with error (sct=0, sc=8) 00:08:09.311 starting I/O failed: -6 00:08:09.311 Write completed with error (sct=0, sc=8) 00:08:09.311 starting I/O failed: -6 00:08:09.311 Read completed with error (sct=0, sc=8) 00:08:09.311 starting I/O failed: -6 00:08:09.311 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed 
with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read 
completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 
00:08:09.312 starting I/O failed: -6 00:08:09.312 Write completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.312 Read completed with error (sct=0, sc=8) 00:08:09.312 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 starting I/O failed: -6 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 
00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Read completed with error (sct=0, sc=8) 00:08:09.313 Write completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error 
(sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Write completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Read completed with error (sct=0, sc=8) 00:08:09.598 Initializing NVMe Controllers 00:08:09.598 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:09.598 Controller IO queue size 128, less than required. 00:08:09.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:09.598 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:09.598 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:09.598 Initialization complete. Launching workers. 00:08:09.598 ======================================================== 00:08:09.598 Latency(us) 00:08:09.598 Device Information : IOPS MiB/s Average min max 00:08:09.598 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.48 0.04 1593527.81 1000175.14 2974590.33 00:08:09.598 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.48 0.04 1594607.51 1001722.52 2975066.57 00:08:09.598 ======================================================== 00:08:09.598 Total : 160.95 0.08 1594067.66 1000175.14 2975066.57 00:08:09.598 00:08:09.598 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:09.598 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1304888 00:08:09.599 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:09.599 [2024-11-15 10:49:58.233512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:09.599 [2024-11-15 10:49:58.233549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
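The qpair completion errors and the CQ transport error above are the point of the test: target/delete_subsystem.sh@32 issued nvmf_delete_subsystem while spdk_nvme_perf still had 128 IOs per qpair queued behind Delay0's one-second latency, so the outstanding commands complete with errors (sct=0, sc=8) and the controller is moved to a failed state. The script then polls until the perf process disappears; a condensed sketch of the loop traced above and below (the delay counter, kill -0 probe, and 0.5 s sleep are the ones in delete_subsystem.sh):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only probes for existence, sends no signal
    (( delay++ > 30 )) && exit 1            # give up after ~15 s of 0.5 s sleeps
    sleep 0.5
done

Once perf exits, the next kill -0 returns "No such process", which is exactly what the log reports next.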
00:08:09.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1304888 00:08:09.905 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1304888) - No such process 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1304888 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1304888 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1304888 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.905 [2024-11-15 10:49:58.757102] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1305601 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:09.905 10:49:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.238 [2024-11-15 10:49:58.848966] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:10.495 10:49:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.495 10:49:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:10.495 10:49:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.061 10:49:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.061 10:49:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:11.061 10:49:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.626 10:50:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.626 10:50:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:11.627 10:50:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.193 10:50:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.193 10:50:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:12.193 10:50:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.449 10:50:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.449 10:50:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:12.449 10:50:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.013 10:50:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.013 10:50:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:13.013 10:50:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.578 10:50:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.578 10:50:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:13.578 10:50:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.141 10:50:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.142 10:50:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:14.142 10:50:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.708 10:50:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.708 10:50:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:14.708 10:50:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.966 10:50:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.966 10:50:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:14.966 10:50:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.530 10:50:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.530 10:50:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:15.530 10:50:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.094 10:50:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.095 10:50:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:16.095 10:50:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.659 10:50:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.659 10:50:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:16.659 10:50:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.225 10:50:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.225 10:50:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601 00:08:17.225 10:50:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.225 Initializing NVMe Controllers 00:08:17.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:17.225 Controller IO queue size 128, less than required. 00:08:17.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:17.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:17.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:17.225 Initialization complete. Launching workers.
00:08:17.225 ========================================================
00:08:17.225 Latency(us)
00:08:17.225 Device Information : IOPS MiB/s Average min max
00:08:17.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001182.02 1000052.51 1004135.04
00:08:17.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002449.76 1000060.08 1005859.04
00:08:17.225 ========================================================
00:08:17.225 Total : 256.00 0.12 1001815.89 1000052.51 1005859.04
00:08:17.225
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1305601
00:08:17.484 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1305601) - No such process
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1305601
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:17.484 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:17.484 rmmod nvme_rdma
00:08:17.484 rmmod nvme_fabrics
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1304718 ']'
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1304718
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1304718 ']'
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1304718
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1304718 00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1304718' 00:08:17.745 killing process with pid 1304718 00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1304718 00:08:17.745 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1304718 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:18.003 00:08:18.003 real 0m18.912s 00:08:18.003 user 0m48.818s 00:08:18.003 sys 0m5.227s 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.003 ************************************ 00:08:18.003 END TEST nvmf_delete_subsystem 00:08:18.003 ************************************ 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.003 ************************************ 00:08:18.003 START TEST nvmf_host_management 00:08:18.003 ************************************ 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:18.003 * Looking for test storage... 
00:08:18.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:18.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.003 --rc genhtml_branch_coverage=1 00:08:18.003 --rc genhtml_function_coverage=1 00:08:18.003 --rc genhtml_legend=1 00:08:18.003 --rc geninfo_all_blocks=1 00:08:18.003 --rc geninfo_unexecuted_blocks=1 00:08:18.003 00:08:18.003 ' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:18.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.003 --rc genhtml_branch_coverage=1 00:08:18.003 --rc genhtml_function_coverage=1 00:08:18.003 --rc genhtml_legend=1 00:08:18.003 --rc geninfo_all_blocks=1 00:08:18.003 --rc geninfo_unexecuted_blocks=1 00:08:18.003 00:08:18.003 ' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:18.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.003 --rc genhtml_branch_coverage=1 00:08:18.003 --rc genhtml_function_coverage=1 00:08:18.003 --rc genhtml_legend=1 00:08:18.003 --rc geninfo_all_blocks=1 00:08:18.003 --rc geninfo_unexecuted_blocks=1 00:08:18.003 00:08:18.003 ' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:18.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.003 --rc genhtml_branch_coverage=1 00:08:18.003 --rc genhtml_function_coverage=1 00:08:18.003 --rc genhtml_legend=1 00:08:18.003 --rc geninfo_all_blocks=1 00:08:18.003 --rc geninfo_unexecuted_blocks=1 00:08:18.003 00:08:18.003 ' 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.003 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.262 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.262 10:50:06 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:23.533 10:50:12 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.533 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:08:23.534 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:08:23.534 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:08:23.534 Found net devices under 0000:af:00.0: mlx_0_0 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:08:23.534 Found net devices under 0000:af:00.1: mlx_0_1 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.534 10:50:12 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:23.534 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:23.534 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff
00:08:23.534 altname enp175s0f0np0
00:08:23.534 altname ens801f0np0
00:08:23.534 inet 192.168.100.8/24 scope global mlx_0_0
00:08:23.534 valid_lft forever preferred_lft forever
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:23.534 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:23.534 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff
00:08:23.534 altname enp175s0f1np1
00:08:23.534 altname ens801f1np1
00:08:23.534 inet 192.168.100.9/24 scope global mlx_0_1
00:08:23.534 valid_lft forever preferred_lft forever
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:23.534 10:50:12
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:23.534 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # 
get_ip_address mlx_0_1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:23.535 192.168.100.9' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:23.535 192.168.100.9' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:23.535 192.168.100.9' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1310117 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1310117 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1310117 ']' 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.535 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.535 [2024-11-15 10:50:12.280493] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:08:23.535 [2024-11-15 10:50:12.280537] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.535 [2024-11-15 10:50:12.342695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.535 [2024-11-15 10:50:12.386179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.535 [2024-11-15 10:50:12.386215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.535 [2024-11-15 10:50:12.386222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.535 [2024-11-15 10:50:12.386228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.535 [2024-11-15 10:50:12.386235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:23.535 [2024-11-15 10:50:12.387789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.535 [2024-11-15 10:50:12.387875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.535 [2024-11-15 10:50:12.387997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.535 [2024-11-15 10:50:12.387998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.794 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.794 [2024-11-15 10:50:12.546694] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x244f530/0x2453a20) succeed. 00:08:23.794 [2024-11-15 10:50:12.555969] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2450bc0/0x24950c0) succeed. 
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.051 Malloc0 00:08:24.051 [2024-11-15 10:50:12.742120] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1310167 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1310167 /var/tmp/bdevperf.sock 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1310167 ']' 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:24.051 {
00:08:24.051 "params": {
00:08:24.051 "name": "Nvme$subsystem",
00:08:24.051 "trtype": "$TEST_TRANSPORT",
00:08:24.051 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:24.051 "adrfam": "ipv4",
00:08:24.051 "trsvcid": "$NVMF_PORT",
00:08:24.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:24.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:24.051 "hdgst": ${hdgst:-false},
00:08:24.051 "ddgst": ${ddgst:-false}
00:08:24.051 },
00:08:24.051 "method": "bdev_nvme_attach_controller"
00:08:24.051 }
00:08:24.051 EOF
00:08:24.051 )")
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:24.051 10:50:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:24.051 "params": {
00:08:24.051 "name": "Nvme0",
00:08:24.051 "trtype": "rdma",
00:08:24.051 "traddr": "192.168.100.8",
00:08:24.051 "adrfam": "ipv4",
00:08:24.051 "trsvcid": "4420",
00:08:24.051 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:24.051 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:24.051 "hdgst": false,
00:08:24.051 "ddgst": false
00:08:24.051 },
00:08:24.051 "method": "bdev_nvme_attach_controller"
00:08:24.051 }'
00:08:24.051 [2024-11-15 10:50:12.836373] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:08:24.051 [2024-11-15 10:50:12.836417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310167 ]
00:08:24.051 [2024-11-15 10:50:12.900056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.309 [2024-11-15 10:50:12.941886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.309 Running I/O for 10 seconds...
00:08:24.309 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.309 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:24.309 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:24.309 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.309 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=174 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 174 -ge 100 ']' 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.567 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.568 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.568 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.568 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.568 10:50:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:25.511 267.00 IOPS, 16.69 MiB/s [2024-11-15T09:50:14.395Z] [2024-11-15 10:50:14.253738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.511 [2024-11-15 10:50:14.253767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:6ba0 p:0 m:0 dnr:0 00:08:25.511 [2024-11-15 10:50:14.253777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.511 [2024-11-15 10:50:14.253785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:6ba0 p:0 m:0 dnr:0 00:08:25.511 [2024-11-15 10:50:14.253792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.511 [2024-11-15 10:50:14.253799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:6ba0 p:0 m:0 dnr:0 00:08:25.511 [2024-11-15 10:50:14.253806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:25.511 [2024-11-15 10:50:14.253813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:cff200 sqhd:6ba0 p:0 m:0 dnr:0 00:08:25.511 [2024-11-15 10:50:14.255216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:25.511 [2024-11-15 10:50:14.255232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
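What happens next is deliberate: the test revokes the host's access to the subsystem and restores it a second later, so the CQ transport error, the failed-state transition, and the burst of ABORTED - SQ DELETION completions below are the expected initiator-side fallout, not a failure. The toggle itself is just two target-side RPCs (sketch, assuming rpc.py talks to the nvmf target's default socket and the NQNs used in this test):

# Revoke and re-grant the host NQN to force a controller drop and reconnect.
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # in-flight I/O aborts with "SQ DELETION" while the queues are torn down
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0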
00:08:25.511 [2024-11-15 10:50:14.255255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d1f980 len:0x10000 key:0x188900
00:08:25.511 [2024-11-15 10:50:14.255264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4952b000 sqhd:7210 p:0 m:0 dnr:0
00:08:25.511 [2024-11-15 10:50:14.255282 - 10:50:14.256420] nvme_qpair.c: 243/474: *NOTICE*: the remaining 63 queued commands print and complete identically - WRITE lba:38400-42240 and READ lba:34176-38144 (keys 0x188300/0x188800/0x188900/0x188a00), each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4952b000 sqhd:7210 p:0 m:0 dnr:0
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1310167
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:25.513 {
00:08:25.513 "params": {
00:08:25.513 "name": "Nvme$subsystem",
00:08:25.513 "trtype": "$TEST_TRANSPORT",
00:08:25.513 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:25.513 "adrfam": "ipv4",
00:08:25.513 "trsvcid": "$NVMF_PORT",
00:08:25.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:25.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:25.513 "hdgst": ${hdgst:-false},
00:08:25.513 "ddgst": ${ddgst:-false}
00:08:25.513 },
00:08:25.513 "method": "bdev_nvme_attach_controller"
00:08:25.513 }
00:08:25.513 EOF
00:08:25.513 )")
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:25.513 10:50:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:25.513 "params": {
00:08:25.513 "name": "Nvme0",
00:08:25.513 "trtype": "rdma",
00:08:25.513 "traddr": "192.168.100.8",
00:08:25.513 "adrfam": "ipv4",
00:08:25.513 "trsvcid": "4420",
00:08:25.513 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:25.513 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:25.513 "hdgst": false,
00:08:25.513 "ddgst": false
00:08:25.513 },
00:08:25.513 "method": "bdev_nvme_attach_controller"
00:08:25.513 }'
00:08:25.513 [2024-11-15 10:50:14.311350] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:08:25.513 [2024-11-15 10:50:14.311395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310412 ]
00:08:25.513 [2024-11-15 10:50:14.373908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.772 [2024-11-15 10:50:14.416531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.772 Running I/O for 1 seconds...
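A note on the relaunch above: bdevperf never sees a config file on disk. The --json /dev/fd/62 argument is bash process substitution at work; the generated JSON is streamed through an anonymous pipe. The same wiring, sketched with a hypothetical placeholder generator (gen_config stands in for gen_nvmf_target_json, and the empty subsystems list only illustrates the fd plumbing, not a usable workload config):

# Feed generated JSON to bdevperf without a temp file, via process substitution.
gen_config() { printf '{"subsystems": []}\n'; }   # placeholder generator
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json <(gen_config) -q 64 -o 65536 -w verify -t 1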
00:08:27.150 2925.00 IOPS, 182.81 MiB/s
00:08:27.150 Latency(us)
00:08:27.150 [2024-11-15T09:50:16.034Z] Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average   min     max
00:08:27.150 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:27.150 Verification LBA range: start 0x0 length 0x400
00:08:27.150 Nvme0n1            :       1.01   2951.15  184.45  0.00    0.00  21232.99  947.42  41943.04
00:08:27.150 [2024-11-15T09:50:16.034Z] ===================================================================================================================
00:08:27.150 [2024-11-15T09:50:16.034Z] Total              :              2951.15  184.45  0.00    0.00  21232.99  947.42  41943.04
00:08:27.150 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1310167 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:27.150 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1310117 ']'
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1310117
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1310117 ']'
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1310117
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management --
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1310117 00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1310117' 00:08:27.151 killing process with pid 1310117 00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1310117 00:08:27.151 10:50:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1310117 00:08:27.411 [2024-11-15 10:50:16.144133] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:27.411 00:08:27.411 real 0m9.451s 00:08:27.411 user 0m19.411s 00:08:27.411 sys 0m4.813s 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.411 ************************************ 00:08:27.411 END TEST nvmf_host_management 00:08:27.411 ************************************ 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.411 ************************************ 00:08:27.411 START TEST nvmf_lvol 00:08:27.411 ************************************ 00:08:27.411 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:27.671 * Looking for test storage... 
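Before the lvol suite gets going, one teardown pattern traced above is worth pinning down: killprocess refuses to kill blindly. It verifies the pid is set and alive (kill -0), resolves the process name so it never signals something like sudo, and only then kills and reaps. A condensed sketch of that logic (hypothetical standalone version of the common.sh helper, Linux-only name check):

killprocess() {
  # Kill a test app by pid, but only after sanity checks.
  local pid=$1 process_name
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2>/dev/null || return 0            # already gone
  if [[ $(uname) == Linux ]]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  [[ $process_name == sudo ]] && return 1            # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                    # reap; valid for our own children
}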
00:08:27.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:27.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.671 --rc genhtml_branch_coverage=1 00:08:27.671 --rc genhtml_function_coverage=1 00:08:27.671 --rc genhtml_legend=1 00:08:27.671 --rc geninfo_all_blocks=1 00:08:27.671 --rc geninfo_unexecuted_blocks=1 00:08:27.671 00:08:27.671 ' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:27.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.671 --rc genhtml_branch_coverage=1 00:08:27.671 --rc genhtml_function_coverage=1 00:08:27.671 --rc genhtml_legend=1 00:08:27.671 --rc geninfo_all_blocks=1 00:08:27.671 --rc geninfo_unexecuted_blocks=1 00:08:27.671 00:08:27.671 ' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:27.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.671 --rc genhtml_branch_coverage=1 00:08:27.671 --rc genhtml_function_coverage=1 00:08:27.671 --rc genhtml_legend=1 00:08:27.671 --rc geninfo_all_blocks=1 00:08:27.671 --rc geninfo_unexecuted_blocks=1 00:08:27.671 00:08:27.671 ' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:27.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.671 --rc genhtml_branch_coverage=1 00:08:27.671 --rc genhtml_function_coverage=1 00:08:27.671 --rc genhtml_legend=1 00:08:27.671 --rc geninfo_all_blocks=1 00:08:27.671 --rc geninfo_unexecuted_blocks=1 00:08:27.671 00:08:27.671 ' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.671 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.672 10:50:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.951 10:50:21 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.951 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:08:32.952 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:08:32.952 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.952 
10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:08:32.952 Found net devices under 0000:af:00.0: mlx_0_0 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:08:32.952 Found net devices under 0000:af:00.1: mlx_0_1 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:32.952 10:50:21 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:32.952 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:32.953 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.953 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:08:32.953 altname enp175s0f0np0 00:08:32.953 altname ens801f0np0 00:08:32.953 inet 
192.168.100.8/24 scope global mlx_0_0 00:08:32.953 valid_lft forever preferred_lft forever 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:32.953 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.953 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:08:32.953 altname enp175s0f1np1 00:08:32.953 altname ens801f1np1 00:08:32.953 inet 192.168.100.9/24 scope global mlx_0_1 00:08:32.953 valid_lft forever preferred_lft forever 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:32.953 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:33.212 10:50:21 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:33.212 192.168.100.9' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:33.212 192.168.100.9' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:33.212 192.168.100.9' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1313984 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1313984 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1313984 ']' 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.212 10:50:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.212 [2024-11-15 10:50:21.980999] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:08:33.212 [2024-11-15 10:50:21.981049] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.212 [2024-11-15 10:50:22.044372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:33.212 [2024-11-15 10:50:22.085537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.212 [2024-11-15 10:50:22.085574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.213 [2024-11-15 10:50:22.085582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.213 [2024-11-15 10:50:22.085589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.213 [2024-11-15 10:50:22.085594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
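The 192.168.100.8/192.168.100.9 addresses collected into RDMA_IP_LIST above are each read off the interface with the small pipeline traced at nvmf/common.sh@117: `ip -o -4` prints one line per address, awk takes field 4 ("ADDR/PREFIX"), and cut strips the prefix length. Extracted as a standalone helper it is just (same three commands as the trace; the example call assumes the mlx_0_0 interface from this test bed):

    get_ip_address() {
        local interface=$1
        # one line per IPv4 address; field 4 is "192.168.100.8/24" -> keep the address
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig

The first and second results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, which the lvol test below uses as listener and connect addresses.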
00:08:33.213 [2024-11-15 10:50:22.086928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.213 [2024-11-15 10:50:22.086948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.213 [2024-11-15 10:50:22.086950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.471 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.471 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:08:33.471 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.471 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.471 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.471 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.471 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:33.730 [2024-11-15 10:50:22.404914] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x94c6e0/0x950bd0) succeed. 00:08:33.730 [2024-11-15 10:50:22.414028] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x94dcd0/0x992270) succeed. 00:08:33.730 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.988 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:33.988 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.247 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:34.247 10:50:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:34.506 10:50:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:34.506 10:50:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ae2da37e-fb46-47d0-9cea-3f4f3b6fd6ea 00:08:34.506 10:50:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae2da37e-fb46-47d0-9cea-3f4f3b6fd6ea lvol 20 00:08:34.765 10:50:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=10ebe09b-f1a9-457d-91a1-ee194070c588 00:08:34.765 10:50:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:35.023 10:50:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 10ebe09b-f1a9-457d-91a1-ee194070c588 00:08:35.281 10:50:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:35.281 [2024-11-15 10:50:24.122781] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.281 10:50:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:35.540 10:50:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1314444 00:08:35.540 10:50:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:35.540 10:50:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:36.914 10:50:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 10ebe09b-f1a9-457d-91a1-ee194070c588 MY_SNAPSHOT 00:08:36.914 10:50:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8cadf4bb-32c4-4d9f-bacf-9f4f8568248e 00:08:36.915 10:50:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 10ebe09b-f1a9-457d-91a1-ee194070c588 30 00:08:36.915 10:50:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8cadf4bb-32c4-4d9f-bacf-9f4f8568248e MY_CLONE 00:08:37.173 10:50:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f3452a13-354d-4f1e-a55f-aecdd9e24d7d 00:08:37.173 10:50:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f3452a13-354d-4f1e-a55f-aecdd9e24d7d 00:08:37.431 10:50:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1314444 00:08:47.405 Initializing NVMe Controllers 00:08:47.405 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:47.405 Controller IO queue size 128, less than required. 00:08:47.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:47.405 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:47.405 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:47.405 Initialization complete. Launching workers. 
00:08:47.405 ========================================================
00:08:47.405 Latency(us)
00:08:47.405 Device Information : IOPS MiB/s Average min max
00:08:47.405 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15792.90 61.69 8106.57 2049.22 41072.20
00:08:47.405 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15872.30 62.00 8065.35 3978.50 36257.97
00:08:47.405 ========================================================
00:08:47.405 Total : 31665.20 123.69 8085.91 2049.22 41072.20
00:08:47.405
00:08:47.405 10:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:47.405 10:50:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 10ebe09b-f1a9-457d-91a1-ee194070c588
00:08:47.405 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae2da37e-fb46-47d0-9cea-3f4f3b6fd6ea
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:47.664 rmmod nvme_rdma
00:08:47.664 rmmod nvme_fabrics
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1313984 ']'
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1313984
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1313984 ']'
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1313984
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1313984
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1313984' 00:08:47.664 killing process with pid 1313984 00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1313984 00:08:47.664 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1313984 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:47.923 00:08:47.923 real 0m20.502s 00:08:47.923 user 1m10.436s 00:08:47.923 sys 0m5.173s 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:47.923 ************************************ 00:08:47.923 END TEST nvmf_lvol 00:08:47.923 ************************************ 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:47.923 10:50:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.182 ************************************ 00:08:48.182 START TEST nvmf_lvs_grow 00:08:48.182 ************************************ 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:48.182 * Looking for test storage... 
00:08:48.182 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:48.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.182 --rc genhtml_branch_coverage=1 00:08:48.182 --rc genhtml_function_coverage=1 00:08:48.182 --rc genhtml_legend=1 00:08:48.182 --rc geninfo_all_blocks=1 00:08:48.182 --rc geninfo_unexecuted_blocks=1 00:08:48.182 00:08:48.182 ' 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:48.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.182 --rc genhtml_branch_coverage=1 00:08:48.182 --rc genhtml_function_coverage=1 00:08:48.182 --rc genhtml_legend=1 00:08:48.182 --rc geninfo_all_blocks=1 00:08:48.182 --rc geninfo_unexecuted_blocks=1 00:08:48.182 00:08:48.182 ' 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:48.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.182 --rc genhtml_branch_coverage=1 00:08:48.182 --rc genhtml_function_coverage=1 00:08:48.182 --rc genhtml_legend=1 00:08:48.182 --rc geninfo_all_blocks=1 00:08:48.182 --rc geninfo_unexecuted_blocks=1 00:08:48.182 00:08:48.182 ' 00:08:48.182 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:48.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.182 --rc genhtml_branch_coverage=1 00:08:48.182 --rc genhtml_function_coverage=1 00:08:48.182 --rc genhtml_legend=1 00:08:48.182 --rc geninfo_all_blocks=1 00:08:48.182 --rc geninfo_unexecuted_blocks=1 00:08:48.182 00:08:48.182 ' 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
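The probe above is scripts/common.sh deciding whether the installed lcov predates 2.x: lt calls cmp_versions, which splits each version string on '.', '-' and ':' (the IFS=.-: reads in the trace) and compares the numeric fields pairwise. A condensed sketch of the same comparison, folded into a single helper (the real script also routes each field through its decimal() normalizer to handle non-numeric components):

    # Return 0 (true) when version $1 < version $2, comparing field by field;
    # missing fields count as 0, so "1.15" < "2" and "2" == "2.0".
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # matches the 'lt 1.15 2' trace above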
00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.183 10:50:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.183 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.183 10:50:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.747 10:50:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:08:54.747 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:08:54.747 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.747 10:50:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:08:54.747 Found net devices under 0000:af:00.0: mlx_0_0 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:08:54.747 Found net devices under 0000:af:00.1: mlx_0_1 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:54.747 10:50:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:54.747 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 
00:08:54.748 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:54.748 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff
00:08:54.748 altname enp175s0f0np0
00:08:54.748 altname ens801f0np0
00:08:54.748 inet 192.168.100.8/24 scope global mlx_0_0
00:08:54.748 valid_lft forever preferred_lft forever
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:54.748 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:54.748 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff
00:08:54.748 altname enp175s0f1np1
00:08:54.748 altname ens801f1np1
00:08:54.748 inet 192.168.100.9/24 scope global mlx_0_1
00:08:54.748 valid_lft forever preferred_lft forever
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:08:54.748 192.168.100.9'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:08:54.748 192.168.100.9'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:08:54.748 192.168.100.9'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1319696
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1319696
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1319696 ']'
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:54.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:54.748 [2024-11-15 10:50:42.660994] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:08:54.748 [2024-11-15 10:50:42.661049] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:54.748 [2024-11-15 10:50:42.725148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.748 [2024-11-15 10:50:42.765895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:54.748 [2024-11-15 10:50:42.765931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:54.748 [2024-11-15 10:50:42.765938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:54.748 [2024-11-15 10:50:42.765944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:54.748 [2024-11-15 10:50:42.765949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
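For reference, the address-discovery loop traced above reduces to the following standalone sketch. The wrapper function mirrors what the nvmf/common.sh trace shows (ip -o -4 piped through awk and cut) but is reconstructed from the trace, not copied from the script; the interface names and addresses are the ones used in this run.

#!/usr/bin/env bash
# Print the first IPv4 address of an interface, the way the trace above does:
# `ip -o -4` emits one line per address and field 4 holds "ADDR/PREFIX".
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 in this run
get_ip_address mlx_0_1   # 192.168.100.9 in this run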
00:08:54.748 [2024-11-15 10:50:42.766583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:54.748 10:50:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:08:54.748 [2024-11-15 10:50:43.090926] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc34fd0/0xc394c0) succeed.
00:08:54.748 [2024-11-15 10:50:43.099930] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc36480/0xc7ab60) succeed.
00:08:54.748 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:08:54.748 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:08:54.748 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:54.748 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:08:54.748 ************************************
00:08:54.749 START TEST lvs_grow_clean
00:08:54.749 ************************************
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:08:54.749 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:08:55.007 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:08:55.007 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:08:55.007 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7 lvol 150
00:08:55.265 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b1d5204-4885-41b0-b405-ba42646024d8
00:08:55.265 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:08:55.265 10:50:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:08:55.265 [2024-11-15 10:50:44.138138] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:08:55.265 [2024-11-15 10:50:44.138191] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:08:55.265 true
00:08:55.524 10:50:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:08:55.524 10:50:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:08:55.524 10:50:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:08:55.524 10:50:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:08:55.782 10:50:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b1d5204-4885-41b0-b405-ba42646024d8
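The RPC sequence above is the heart of the test setup: a 200M file-backed AIO bdev, an lvstore with 4 MiB clusters (49 data clusters), a 150M lvol, then the backing file is doubled and rescanned so the bdev grows to 102400 blocks while the lvstore still reports 49 clusters until it is explicitly grown. A condensed sketch of the same flow, under the assumption that $rpc points at scripts/rpc.py and $aio at the test file used in this run (the shell variables themselves are illustrative):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio"                          # 200M backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096        # AIO bdev with 4K blocks (51200 blocks)
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 data clusters
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150M lvol (38 clusters)
truncate -s 400M "$aio"                          # grow the file...
$rpc bdev_aio_rescan aio_bdev                    # ...the bdev now sees 102400 blocks
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49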
00:08:56.039 10:50:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:08:56.039 [2024-11-15 10:50:44.908591] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:08:56.039 10:50:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1320165
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1320165 /var/tmp/bdevperf.sock
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1320165 ']'
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:56.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:56.298 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:08:56.298 [2024-11-15 10:50:45.156766] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:08:56.298 [2024-11-15 10:50:45.156814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320165 ]
00:08:56.556 [2024-11-15 10:50:45.220692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.556 [2024-11-15 10:50:45.263035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:56.556 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:56.556 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0
00:08:56.556 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:08:56.814 Nvme0n1
00:08:56.814 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
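bdevperf runs as a second SPDK application with its own RPC socket, so the controller is attached through -s /var/tmp/bdevperf.sock rather than the target's default /var/tmp/spdk.sock. A minimal sketch of that handshake, using the binaries and addresses from this run (the backgrounding is illustrative; the harness uses waitforlisten):

# Start bdevperf idle on core 1 (-z keeps it waiting for an RPC trigger)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# Attach the exported namespace over RDMA; it shows up as bdev "Nvme0n1"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000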
00:08:57.072 "ana_reporting": false 00:08:57.072 }, 00:08:57.072 "vs": { 00:08:57.072 "nvme_version": "1.3" 00:08:57.072 }, 00:08:57.072 "ns_data": { 00:08:57.072 "id": 1, 00:08:57.072 "can_share": true 00:08:57.072 } 00:08:57.072 } 00:08:57.072 ], 00:08:57.072 "mp_policy": "active_passive" 00:08:57.072 } 00:08:57.072 } 00:08:57.072 ] 00:08:57.072 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1320214 00:08:57.072 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:57.072 10:50:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.072 Running I/O for 10 seconds... 00:08:58.452 Latency(us) 00:08:58.452 [2024-11-15T09:50:47.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.452 Nvme0n1 : 1.00 33219.00 129.76 0.00 0.00 0.00 0.00 0.00 00:08:58.452 [2024-11-15T09:50:47.336Z] =================================================================================================================== 00:08:58.452 [2024-11-15T09:50:47.336Z] Total : 33219.00 129.76 0.00 0.00 0.00 0.00 0.00 00:08:58.452 00:08:59.018 10:50:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7 00:08:59.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.277 Nvme0n1 : 2.00 33569.00 131.13 0.00 0.00 0.00 0.00 0.00 00:08:59.277 [2024-11-15T09:50:48.161Z] =================================================================================================================== 00:08:59.277 [2024-11-15T09:50:48.161Z] Total : 33569.00 131.13 0.00 0.00 0.00 0.00 0.00 00:08:59.277 00:08:59.277 true 00:08:59.277 10:50:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7 00:08:59.277 10:50:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:59.534 10:50:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:59.534 10:50:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:59.534 10:50:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1320214 00:09:00.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.101 Nvme0n1 : 3.00 33717.33 131.71 0.00 0.00 0.00 0.00 0.00 00:09:00.101 [2024-11-15T09:50:48.985Z] =================================================================================================================== 00:09:00.101 [2024-11-15T09:50:48.985Z] Total : 33717.33 131.71 0.00 0.00 0.00 0.00 0.00 00:09:00.101 00:09:01.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.476 Nvme0n1 : 4.00 33815.25 132.09 0.00 0.00 0.00 0.00 0.00 00:09:01.476 [2024-11-15T09:50:50.360Z] 
=================================================================================================================== 00:09:01.476 [2024-11-15T09:50:50.360Z] Total : 33815.25 132.09 0.00 0.00 0.00 0.00 0.00 00:09:01.476 00:09:02.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.043 Nvme0n1 : 5.00 33810.80 132.07 0.00 0.00 0.00 0.00 0.00 00:09:02.043 [2024-11-15T09:50:50.927Z] =================================================================================================================== 00:09:02.043 [2024-11-15T09:50:50.927Z] Total : 33810.80 132.07 0.00 0.00 0.00 0.00 0.00 00:09:02.043 00:09:03.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.418 Nvme0n1 : 6.00 33876.83 132.33 0.00 0.00 0.00 0.00 0.00 00:09:03.418 [2024-11-15T09:50:52.302Z] =================================================================================================================== 00:09:03.418 [2024-11-15T09:50:52.302Z] Total : 33876.83 132.33 0.00 0.00 0.00 0.00 0.00 00:09:03.418 00:09:04.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.353 Nvme0n1 : 7.00 33925.71 132.52 0.00 0.00 0.00 0.00 0.00 00:09:04.353 [2024-11-15T09:50:53.237Z] =================================================================================================================== 00:09:04.353 [2024-11-15T09:50:53.237Z] Total : 33925.71 132.52 0.00 0.00 0.00 0.00 0.00 00:09:04.353 00:09:05.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.400 Nvme0n1 : 8.00 33952.88 132.63 0.00 0.00 0.00 0.00 0.00 00:09:05.400 [2024-11-15T09:50:54.284Z] =================================================================================================================== 00:09:05.400 [2024-11-15T09:50:54.284Z] Total : 33952.88 132.63 0.00 0.00 0.00 0.00 0.00 00:09:05.400 00:09:06.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.335 Nvme0n1 : 9.00 33979.78 132.73 0.00 0.00 0.00 0.00 0.00 00:09:06.335 [2024-11-15T09:50:55.219Z] =================================================================================================================== 00:09:06.335 [2024-11-15T09:50:55.219Z] Total : 33979.78 132.73 0.00 0.00 0.00 0.00 0.00 00:09:06.335 00:09:07.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.271 Nvme0n1 : 10.00 34007.10 132.84 0.00 0.00 0.00 0.00 0.00 00:09:07.271 [2024-11-15T09:50:56.155Z] =================================================================================================================== 00:09:07.271 [2024-11-15T09:50:56.155Z] Total : 34007.10 132.84 0.00 0.00 0.00 0.00 0.00 00:09:07.271 00:09:07.271 00:09:07.271 Latency(us) 00:09:07.271 [2024-11-15T09:50:56.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.271 Nvme0n1 : 10.00 34007.57 132.84 0.00 0.00 3760.62 2820.90 13677.08 00:09:07.271 [2024-11-15T09:50:56.155Z] =================================================================================================================== 00:09:07.271 [2024-11-15T09:50:56.155Z] Total : 34007.57 132.84 0.00 0.00 3760.62 2820.90 13677.08 00:09:07.271 { 00:09:07.271 "results": [ 00:09:07.271 { 00:09:07.271 "job": "Nvme0n1", 00:09:07.271 "core_mask": "0x2", 00:09:07.271 "workload": "randwrite", 00:09:07.271 "status": "finished", 00:09:07.271 "queue_depth": 128, 00:09:07.271 "io_size": 4096, 
00:09:07.271 "runtime": 10.003626, 00:09:07.271 "iops": 34007.56885553298, 00:09:07.271 "mibps": 132.84206584192572, 00:09:07.271 "io_failed": 0, 00:09:07.271 "io_timeout": 0, 00:09:07.271 "avg_latency_us": 3760.6225796078174, 00:09:07.271 "min_latency_us": 2820.897391304348, 00:09:07.271 "max_latency_us": 13677.078260869564 00:09:07.271 } 00:09:07.271 ], 00:09:07.271 "core_count": 1 00:09:07.271 } 00:09:07.271 10:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1320165 00:09:07.271 10:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1320165 ']' 00:09:07.271 10:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1320165 00:09:07.271 10:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:09:07.271 10:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:07.271 10:50:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1320165 00:09:07.271 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:07.271 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:07.271 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1320165' 00:09:07.271 killing process with pid 1320165 00:09:07.271 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1320165 00:09:07.271 Received shutdown signal, test time was about 10.000000 seconds 00:09:07.271 00:09:07.271 Latency(us) 00:09:07.271 [2024-11-15T09:50:56.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.271 [2024-11-15T09:50:56.155Z] =================================================================================================================== 00:09:07.271 [2024-11-15T09:50:56.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.271 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1320165 00:09:07.530 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:07.530 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.788 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7 00:09:07.788 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:08.047 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:08.047 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:08.047 10:50:56 
00:09:08.047 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:08.306 [2024-11-15 10:50:56.947084] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:09:08.306 10:50:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:09:08.306 request:
00:09:08.306 {
00:09:08.306 "uuid": "e1f4c797-b0ae-4d2f-b722-a17c89354bb7",
00:09:08.306 "method": "bdev_lvol_get_lvstores",
00:09:08.306 "req_id": 1
00:09:08.306 }
00:09:08.306 Got JSON-RPC error response
00:09:08.306 response:
00:09:08.306 {
00:09:08.306 "code": -19,
00:09:08.306 "message": "No such device"
00:09:08.306 }
00:09:08.306 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:09:08.306 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:08.306 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:08.306 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:08.306 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:08.565 aio_bdev
00:09:08.565 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b1d5204-4885-41b0-b405-ba42646024d8
00:09:08.565 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=1b1d5204-4885-41b0-b405-ba42646024d8
00:09:08.565 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:08.565 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i
00:09:08.565 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:08.565 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:08.565 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:09:08.823 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b1d5204-4885-41b0-b405-ba42646024d8 -t 2000
00:09:09.082 [
00:09:09.082 {
00:09:09.082 "name": "1b1d5204-4885-41b0-b405-ba42646024d8",
00:09:09.082 "aliases": [
00:09:09.082 "lvs/lvol"
00:09:09.082 ],
00:09:09.082 "product_name": "Logical Volume",
00:09:09.082 "block_size": 4096,
00:09:09.082 "num_blocks": 38912,
00:09:09.082 "uuid": "1b1d5204-4885-41b0-b405-ba42646024d8",
00:09:09.082 "assigned_rate_limits": {
00:09:09.082 "rw_ios_per_sec": 0,
00:09:09.082 "rw_mbytes_per_sec": 0,
00:09:09.082 "r_mbytes_per_sec": 0,
00:09:09.082 "w_mbytes_per_sec": 0
00:09:09.082 },
00:09:09.082 "claimed": false,
00:09:09.082 "zoned": false,
00:09:09.082 "supported_io_types": {
00:09:09.082 "read": true,
00:09:09.082 "write": true,
00:09:09.082 "unmap": true,
00:09:09.082 "flush": false,
00:09:09.082 "reset": true,
00:09:09.082 "nvme_admin": false,
00:09:09.082 "nvme_io": false,
00:09:09.082 "nvme_io_md": false,
00:09:09.082 "write_zeroes": true,
00:09:09.082 "zcopy": false,
00:09:09.082 "get_zone_info": false,
00:09:09.082 "zone_management": false,
00:09:09.082 "zone_append": false,
00:09:09.082 "compare": false,
00:09:09.082 "compare_and_write": false,
00:09:09.082 "abort": false,
00:09:09.082 "seek_hole": true,
00:09:09.082 "seek_data": true,
00:09:09.082 "copy": false,
00:09:09.082 "nvme_iov_md": false
00:09:09.082 },
00:09:09.082 "driver_specific": {
00:09:09.082 "lvol": {
00:09:09.082 "lvol_store_uuid": "e1f4c797-b0ae-4d2f-b722-a17c89354bb7",
00:09:09.082 "base_bdev": "aio_bdev",
00:09:09.082 "thin_provision": false,
00:09:09.082 "num_allocated_clusters": 38,
00:09:09.082 "snapshot": false,
00:09:09.082 "clone": false,
00:09:09.082 "esnap_clone": false
00:09:09.082 }
00:09:09.082 }
00:09:09.082 }
00:09:09.082 ]
00:09:09.082 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0
00:09:09.082 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:09:09.082 10:50:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:09:09.341 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:09:09.341 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:09:09.341 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:09:09.341 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:09:09.600 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b1d5204-4885-41b0-b405-ba42646024d8
00:09:09.858 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e1f4c797-b0ae-4d2f-b722-a17c89354bb7
00:09:09.858 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:10.116
00:09:10.116 real 0m15.566s
00:09:10.116 user 0m15.621s
00:09:10.116 sys 0m0.968s
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:09:10.116 ************************************
00:09:10.116 END TEST lvs_grow_clean
00:09:10.116 ************************************
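The NOT block traced above is a negative assertion: once bdev_aio_delete closes the lvstore, bdev_lvol_get_lvstores must fail with JSON-RPC -19 ("No such device"), and the NOT wrapper inverts the exit status so that failure is the passing outcome. Sketched without the helper (same $rpc/$lvs assumptions as before):

$rpc bdev_aio_delete aio_bdev                  # closes lvstore "lvs" underneath
if $rpc bdev_lvol_get_lvstores -u "$lvs"; then
    echo "FAIL: lvstore still reachable after bdev_aio_delete" >&2
    exit 1
fi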
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:10.116 ************************************
00:09:10.116 START TEST lvs_grow_dirty
00:09:10.116 ************************************
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:10.116 10:50:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:10.375 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:10.375 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:10.375 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8a8650f1-38db-480c-9f84-16e756e1a7c8
00:09:10.375 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8
00:09:10.375 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:10.633 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:10.633 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:09:10.633 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 lvol 150
00:09:10.891 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f4986a05-edb8-418e-a5dd-783b306976d0
00:09:10.891 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:10.891 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:11.149 [2024-11-15 10:50:59.805817] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:11.149 [2024-11-15 10:50:59.805865] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:11.149 true
00:09:11.149 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8
00:09:11.149 10:50:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:11.149 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:09:11.407 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:11.665 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f4986a05-edb8-418e-a5dd-783b306976d0
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:09:11.925 [2024-11-15 10:51:00.572256] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1322897
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1322897 /var/tmp/bdevperf.sock
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1322897 ']'
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:11.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:11.925 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:12.184 10:51:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:12.184 [2024-11-15 10:51:00.823303] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:09:12.184 [2024-11-15 10:51:00.823349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322897 ]
00:09:12.184 [2024-11-15 10:51:00.885514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:12.184 [2024-11-15 10:51:00.925793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:12.184 10:51:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:12.184 10:51:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:09:12.184 10:51:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:12.442 Nvme0n1
00:09:12.442 10:51:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
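As in the clean variant, the workload itself is kicked off out of band: bdevperf was launched with -z and sits idle until bdevperf.py sends perform_tests over the shared socket, which is what produces the per-second tables below while the lvstore is grown mid-run. Sketch (paths from this run; the explicit backgrounding mirrors what the harness does with run_test_pid):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 2          # give I/O time to start before growing the lvstore under load
# ... grow and verify here, then ...
wait "$run_test_pid"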
00:09:12.701 "ana_reporting": false 00:09:12.701 }, 00:09:12.701 "vs": { 00:09:12.701 "nvme_version": "1.3" 00:09:12.701 }, 00:09:12.701 "ns_data": { 00:09:12.701 "id": 1, 00:09:12.701 "can_share": true 00:09:12.701 } 00:09:12.701 } 00:09:12.701 ], 00:09:12.701 "mp_policy": "active_passive" 00:09:12.701 } 00:09:12.701 } 00:09:12.701 ] 00:09:12.701 10:51:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.701 10:51:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1322967 00:09:12.702 10:51:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.702 Running I/O for 10 seconds... 00:09:14.078 Latency(us) 00:09:14.078 [2024-11-15T09:51:02.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.078 Nvme0n1 : 1.00 33056.00 129.12 0.00 0.00 0.00 0.00 0.00 00:09:14.078 [2024-11-15T09:51:02.962Z] =================================================================================================================== 00:09:14.078 [2024-11-15T09:51:02.962Z] Total : 33056.00 129.12 0.00 0.00 0.00 0.00 0.00 00:09:14.078 00:09:14.643 10:51:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:14.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.901 Nvme0n1 : 2.00 33440.50 130.63 0.00 0.00 0.00 0.00 0.00 00:09:14.901 [2024-11-15T09:51:03.785Z] =================================================================================================================== 00:09:14.901 [2024-11-15T09:51:03.785Z] Total : 33440.50 130.63 0.00 0.00 0.00 0.00 0.00 00:09:14.901 00:09:14.901 true 00:09:14.901 10:51:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:14.901 10:51:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:15.159 10:51:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.159 10:51:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.159 10:51:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1322967 00:09:15.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.727 Nvme0n1 : 3.00 33611.00 131.29 0.00 0.00 0.00 0.00 0.00 00:09:15.727 [2024-11-15T09:51:04.611Z] =================================================================================================================== 00:09:15.727 [2024-11-15T09:51:04.611Z] Total : 33611.00 131.29 0.00 0.00 0.00 0.00 0.00 00:09:15.727 00:09:17.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.102 Nvme0n1 : 4.00 33744.25 131.81 0.00 0.00 0.00 0.00 0.00 00:09:17.102 [2024-11-15T09:51:05.986Z] 
=================================================================================================================== 00:09:17.102 [2024-11-15T09:51:05.986Z] Total : 33744.25 131.81 0.00 0.00 0.00 0.00 0.00 00:09:17.102 00:09:18.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.037 Nvme0n1 : 5.00 33824.40 132.13 0.00 0.00 0.00 0.00 0.00 00:09:18.037 [2024-11-15T09:51:06.921Z] =================================================================================================================== 00:09:18.037 [2024-11-15T09:51:06.921Z] Total : 33824.40 132.13 0.00 0.00 0.00 0.00 0.00 00:09:18.037 00:09:18.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.972 Nvme0n1 : 6.00 33871.83 132.31 0.00 0.00 0.00 0.00 0.00 00:09:18.972 [2024-11-15T09:51:07.856Z] =================================================================================================================== 00:09:18.972 [2024-11-15T09:51:07.856Z] Total : 33871.83 132.31 0.00 0.00 0.00 0.00 0.00 00:09:18.972 00:09:19.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.907 Nvme0n1 : 7.00 33905.71 132.44 0.00 0.00 0.00 0.00 0.00 00:09:19.907 [2024-11-15T09:51:08.791Z] =================================================================================================================== 00:09:19.907 [2024-11-15T09:51:08.791Z] Total : 33905.71 132.44 0.00 0.00 0.00 0.00 0.00 00:09:19.907 00:09:20.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.843 Nvme0n1 : 8.00 33920.62 132.50 0.00 0.00 0.00 0.00 0.00 00:09:20.843 [2024-11-15T09:51:09.727Z] =================================================================================================================== 00:09:20.843 [2024-11-15T09:51:09.727Z] Total : 33920.62 132.50 0.00 0.00 0.00 0.00 0.00 00:09:20.843 00:09:21.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.779 Nvme0n1 : 9.00 33916.22 132.49 0.00 0.00 0.00 0.00 0.00 00:09:21.779 [2024-11-15T09:51:10.663Z] =================================================================================================================== 00:09:21.779 [2024-11-15T09:51:10.663Z] Total : 33916.22 132.49 0.00 0.00 0.00 0.00 0.00 00:09:21.779 00:09:23.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.154 Nvme0n1 : 10.00 33907.90 132.45 0.00 0.00 0.00 0.00 0.00 00:09:23.154 [2024-11-15T09:51:12.038Z] =================================================================================================================== 00:09:23.154 [2024-11-15T09:51:12.038Z] Total : 33907.90 132.45 0.00 0.00 0.00 0.00 0.00 00:09:23.154 00:09:23.154 00:09:23.154 Latency(us) 00:09:23.154 [2024-11-15T09:51:12.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.154 Nvme0n1 : 10.00 33908.10 132.45 0.00 0.00 3771.79 2721.17 14531.90 00:09:23.154 [2024-11-15T09:51:12.038Z] =================================================================================================================== 00:09:23.154 [2024-11-15T09:51:12.038Z] Total : 33908.10 132.45 0.00 0.00 3771.79 2721.17 14531.90 00:09:23.154 { 00:09:23.154 "results": [ 00:09:23.154 { 00:09:23.154 "job": "Nvme0n1", 00:09:23.154 "core_mask": "0x2", 00:09:23.154 "workload": "randwrite", 00:09:23.154 "status": "finished", 00:09:23.154 "queue_depth": 128, 00:09:23.154 "io_size": 4096, 
00:09:23.154 "runtime": 10.003302, 00:09:23.154 "iops": 33908.103544209705, 00:09:23.154 "mibps": 132.45352946956916, 00:09:23.154 "io_failed": 0, 00:09:23.154 "io_timeout": 0, 00:09:23.154 "avg_latency_us": 3771.79280932146, 00:09:23.154 "min_latency_us": 2721.168695652174, 00:09:23.154 "max_latency_us": 14531.895652173913 00:09:23.154 } 00:09:23.154 ], 00:09:23.154 "core_count": 1 00:09:23.154 } 00:09:23.154 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1322897 00:09:23.154 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1322897 ']' 00:09:23.154 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1322897 00:09:23.154 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:09:23.154 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:23.154 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1322897 00:09:23.155 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:23.155 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:23.155 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1322897' 00:09:23.155 killing process with pid 1322897 00:09:23.155 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1322897 00:09:23.155 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.155 00:09:23.155 Latency(us) 00:09:23.155 [2024-11-15T09:51:12.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.155 [2024-11-15T09:51:12.039Z] =================================================================================================================== 00:09:23.155 [2024-11-15T09:51:12.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.155 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1322897 00:09:23.155 10:51:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:23.413 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.413 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:23.413 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:23.672 10:51:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1319696 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1319696 00:09:23.672 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1319696 Killed "${NVMF_APP[@]}" "$@" 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1325259 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1325259 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1325259 ']' 00:09:23.672 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.673 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:23.673 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.673 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:23.673 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.673 [2024-11-15 10:51:12.520251] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:09:23.673 [2024-11-15 10:51:12.520298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.933 [2024-11-15 10:51:12.584586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.933 [2024-11-15 10:51:12.627089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.933 [2024-11-15 10:51:12.627124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.933 [2024-11-15 10:51:12.627132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.933 [2024-11-15 10:51:12.627138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:23.933 [2024-11-15 10:51:12.627144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.933 [2024-11-15 10:51:12.627727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.933 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:23.933 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:23.933 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.933 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.933 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.933 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.933 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.191 [2024-11-15 10:51:12.938497] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:24.191 [2024-11-15 10:51:12.938593] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:24.191 [2024-11-15 10:51:12.938619] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f4986a05-edb8-418e-a5dd-783b306976d0 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=f4986a05-edb8-418e-a5dd-783b306976d0 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:24.191 10:51:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.450 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4986a05-edb8-418e-a5dd-783b306976d0 -t 2000 00:09:24.450 [ 00:09:24.450 { 00:09:24.450 "name": "f4986a05-edb8-418e-a5dd-783b306976d0", 00:09:24.450 "aliases": [ 00:09:24.450 "lvs/lvol" 00:09:24.450 ], 00:09:24.450 "product_name": "Logical Volume", 00:09:24.450 "block_size": 4096, 00:09:24.450 "num_blocks": 38912, 00:09:24.450 "uuid": "f4986a05-edb8-418e-a5dd-783b306976d0", 00:09:24.450 "assigned_rate_limits": { 00:09:24.450 "rw_ios_per_sec": 0, 00:09:24.450 "rw_mbytes_per_sec": 0, 
00:09:24.450 "r_mbytes_per_sec": 0, 00:09:24.450 "w_mbytes_per_sec": 0 00:09:24.450 }, 00:09:24.450 "claimed": false, 00:09:24.450 "zoned": false, 00:09:24.450 "supported_io_types": { 00:09:24.450 "read": true, 00:09:24.450 "write": true, 00:09:24.450 "unmap": true, 00:09:24.450 "flush": false, 00:09:24.450 "reset": true, 00:09:24.450 "nvme_admin": false, 00:09:24.450 "nvme_io": false, 00:09:24.450 "nvme_io_md": false, 00:09:24.450 "write_zeroes": true, 00:09:24.450 "zcopy": false, 00:09:24.450 "get_zone_info": false, 00:09:24.450 "zone_management": false, 00:09:24.450 "zone_append": false, 00:09:24.450 "compare": false, 00:09:24.450 "compare_and_write": false, 00:09:24.450 "abort": false, 00:09:24.450 "seek_hole": true, 00:09:24.450 "seek_data": true, 00:09:24.450 "copy": false, 00:09:24.450 "nvme_iov_md": false 00:09:24.450 }, 00:09:24.450 "driver_specific": { 00:09:24.450 "lvol": { 00:09:24.450 "lvol_store_uuid": "8a8650f1-38db-480c-9f84-16e756e1a7c8", 00:09:24.450 "base_bdev": "aio_bdev", 00:09:24.450 "thin_provision": false, 00:09:24.450 "num_allocated_clusters": 38, 00:09:24.450 "snapshot": false, 00:09:24.450 "clone": false, 00:09:24.450 "esnap_clone": false 00:09:24.450 } 00:09:24.450 } 00:09:24.450 } 00:09:24.450 ] 00:09:24.450 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:24.708 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:24.708 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:24.708 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:24.708 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:24.708 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:24.966 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:24.966 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.225 [2024-11-15 10:51:13.879575] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:25.225 10:51:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:25.225 request: 00:09:25.225 { 00:09:25.225 "uuid": "8a8650f1-38db-480c-9f84-16e756e1a7c8", 00:09:25.225 "method": "bdev_lvol_get_lvstores", 00:09:25.225 "req_id": 1 00:09:25.225 } 00:09:25.225 Got JSON-RPC error response 00:09:25.225 response: 00:09:25.225 { 00:09:25.225 "code": -19, 00:09:25.225 "message": "No such device" 00:09:25.225 } 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.484 aio_bdev 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f4986a05-edb8-418e-a5dd-783b306976d0 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=f4986a05-edb8-418e-a5dd-783b306976d0 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:25.484 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:25.484 10:51:14 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.742 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4986a05-edb8-418e-a5dd-783b306976d0 -t 2000 00:09:26.004 [ 00:09:26.004 { 00:09:26.004 "name": "f4986a05-edb8-418e-a5dd-783b306976d0", 00:09:26.004 "aliases": [ 00:09:26.004 "lvs/lvol" 00:09:26.004 ], 00:09:26.004 "product_name": "Logical Volume", 00:09:26.004 "block_size": 4096, 00:09:26.004 "num_blocks": 38912, 00:09:26.004 "uuid": "f4986a05-edb8-418e-a5dd-783b306976d0", 00:09:26.004 "assigned_rate_limits": { 00:09:26.004 "rw_ios_per_sec": 0, 00:09:26.004 "rw_mbytes_per_sec": 0, 00:09:26.004 "r_mbytes_per_sec": 0, 00:09:26.004 "w_mbytes_per_sec": 0 00:09:26.004 }, 00:09:26.004 "claimed": false, 00:09:26.004 "zoned": false, 00:09:26.004 "supported_io_types": { 00:09:26.004 "read": true, 00:09:26.004 "write": true, 00:09:26.004 "unmap": true, 00:09:26.004 "flush": false, 00:09:26.004 "reset": true, 00:09:26.004 "nvme_admin": false, 00:09:26.004 "nvme_io": false, 00:09:26.004 "nvme_io_md": false, 00:09:26.004 "write_zeroes": true, 00:09:26.004 "zcopy": false, 00:09:26.004 "get_zone_info": false, 00:09:26.004 "zone_management": false, 00:09:26.004 "zone_append": false, 00:09:26.004 "compare": false, 00:09:26.004 "compare_and_write": false, 00:09:26.004 "abort": false, 00:09:26.004 "seek_hole": true, 00:09:26.004 "seek_data": true, 00:09:26.004 "copy": false, 00:09:26.004 "nvme_iov_md": false 00:09:26.004 }, 00:09:26.004 "driver_specific": { 00:09:26.004 "lvol": { 00:09:26.004 "lvol_store_uuid": "8a8650f1-38db-480c-9f84-16e756e1a7c8", 00:09:26.004 "base_bdev": "aio_bdev", 00:09:26.004 "thin_provision": false, 00:09:26.004 "num_allocated_clusters": 38, 00:09:26.004 "snapshot": false, 00:09:26.004 "clone": false, 00:09:26.004 "esnap_clone": false 00:09:26.004 } 00:09:26.004 } 00:09:26.004 } 00:09:26.004 ] 00:09:26.004 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:26.004 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:26.004 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:26.004 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:26.004 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:26.004 10:51:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:26.263 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:26.263 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f4986a05-edb8-418e-a5dd-783b306976d0 00:09:26.521 10:51:15 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a8650f1-38db-480c-9f84-16e756e1a7c8 00:09:26.780 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.780 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.038 00:09:27.038 real 0m16.868s 00:09:27.038 user 0m44.533s 00:09:27.038 sys 0m2.841s 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.038 ************************************ 00:09:27.038 END TEST lvs_grow_dirty 00:09:27.038 ************************************ 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:27.038 nvmf_trace.0 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:27.038 rmmod nvme_rdma 00:09:27.038 rmmod nvme_fabrics 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:27.038 
10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1325259 ']' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1325259 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1325259 ']' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1325259 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1325259 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1325259' 00:09:27.038 killing process with pid 1325259 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1325259 00:09:27.038 10:51:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1325259 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:27.297 00:09:27.297 real 0m39.215s 00:09:27.297 user 1m5.689s 00:09:27.297 sys 0m8.363s 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.297 ************************************ 00:09:27.297 END TEST nvmf_lvs_grow 00:09:27.297 ************************************ 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.297 ************************************ 00:09:27.297 START TEST nvmf_bdev_io_wait 00:09:27.297 ************************************ 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:27.297 * Looking for test storage... 
00:09:27.297 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.297 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:27.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.557 --rc genhtml_branch_coverage=1 00:09:27.557 --rc genhtml_function_coverage=1 00:09:27.557 --rc genhtml_legend=1 00:09:27.557 --rc geninfo_all_blocks=1 00:09:27.557 --rc geninfo_unexecuted_blocks=1 00:09:27.557 00:09:27.557 ' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:27.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.557 --rc genhtml_branch_coverage=1 00:09:27.557 --rc genhtml_function_coverage=1 00:09:27.557 --rc genhtml_legend=1 00:09:27.557 --rc geninfo_all_blocks=1 00:09:27.557 --rc geninfo_unexecuted_blocks=1 00:09:27.557 00:09:27.557 ' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:27.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.557 --rc genhtml_branch_coverage=1 00:09:27.557 --rc genhtml_function_coverage=1 00:09:27.557 --rc genhtml_legend=1 00:09:27.557 --rc geninfo_all_blocks=1 00:09:27.557 --rc geninfo_unexecuted_blocks=1 00:09:27.557 00:09:27.557 ' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:27.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.557 --rc genhtml_branch_coverage=1 00:09:27.557 --rc genhtml_function_coverage=1 00:09:27.557 --rc genhtml_legend=1 00:09:27.557 --rc geninfo_all_blocks=1 00:09:27.557 --rc geninfo_unexecuted_blocks=1 00:09:27.557 00:09:27.557 ' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.557 10:51:16 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.557 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.558 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.558 10:51:16 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.123 10:51:21 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.123 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:09:34.124 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:09:34.124 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:34.124 10:51:21 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:09:34.124 Found net devices under 0000:af:00.0: mlx_0_0 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:09:34.124 Found net devices under 0000:af:00.1: mlx_0_1 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe 
ib_core 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:34.124 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.124 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:09:34.124 altname enp175s0f0np0 00:09:34.124 altname ens801f0np0 00:09:34.124 inet 192.168.100.8/24 scope global mlx_0_0 00:09:34.124 valid_lft forever preferred_lft forever 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:34.124 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:34.124 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.124 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:09:34.124 altname enp175s0f1np1 00:09:34.124 altname ens801f1np1 00:09:34.124 inet 192.168.100.9/24 scope global mlx_0_1 00:09:34.124 valid_lft forever preferred_lft forever 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:34.125 192.168.100.9' 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:34.125 192.168.100.9' 00:09:34.125 10:51:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:34.125 192.168.100.9' 
00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1329129 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1329129 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1329129 ']' 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 [2024-11-15 10:51:22.087000] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:09:34.125 [2024-11-15 10:51:22.087044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.125 [2024-11-15 10:51:22.150525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.125 [2024-11-15 10:51:22.195067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.125 [2024-11-15 10:51:22.195104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
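A note on what the common.sh@484-@486 lines above just did: get_available_rdma_ips returns one address per RDMA-capable netdev, and the first and second entries of that list become the target IPs. A minimal sketch of that selection logic, using the values visible in this run (the real implementation lives in test/nvmf/common.sh):

# Newline-separated list as produced by get_available_rdma_ips on this rig.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
# First entry becomes the primary listener address (common.sh@485).
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
# Drop the first line, then take the next one (common.sh@486).
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)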
00:09:34.125 [2024-11-15 10:51:22.195112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.125 [2024-11-15 10:51:22.195117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.125 [2024-11-15 10:51:22.195122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.125 [2024-11-15 10:51:22.196708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.125 [2024-11-15 10:51:22.196795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.125 [2024-11-15 10:51:22.196946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.125 [2024-11-15 10:51:22.196948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 [2024-11-15 10:51:22.369354] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x617290/0x61b780) succeed. 00:09:34.125 [2024-11-15 10:51:22.378355] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x618920/0x65ce20) succeed. 
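Because nvmf_tgt was started with --wait-for-rpc, the bdev options above had to be set before framework_start_init released the app, and only then could the RDMA transport be created. A sketch of the same bring-up done by hand, reconstructed from the rpc_cmd lines in this trace (rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; paths are relative to the SPDK checkout):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# (poll until /var/tmp/spdk.sock accepts connections; the harness does this via waitforlisten)
./scripts/rpc.py bdev_set_options -p 5 -c 1     # bdev io pool/cache sizing, only allowed pre-init
./scripts/rpc.py framework_start_init           # releases the --wait-for-rpc hold-off
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# The namespace and listener setup traced just below (bdev_io_wait.sh@22-@25):
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420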
00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.125 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.126 Malloc0 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.126 [2024-11-15 10:51:22.556383] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1329312 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1329315 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.126 { 00:09:34.126 "params": { 00:09:34.126 "name": "Nvme$subsystem", 00:09:34.126 "trtype": "$TEST_TRANSPORT", 
00:09:34.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.126 "adrfam": "ipv4", 00:09:34.126 "trsvcid": "$NVMF_PORT", 00:09:34.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.126 "hdgst": ${hdgst:-false}, 00:09:34.126 "ddgst": ${ddgst:-false} 00:09:34.126 }, 00:09:34.126 "method": "bdev_nvme_attach_controller" 00:09:34.126 } 00:09:34.126 EOF 00:09:34.126 )") 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1329318 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.126 { 00:09:34.126 "params": { 00:09:34.126 "name": "Nvme$subsystem", 00:09:34.126 "trtype": "$TEST_TRANSPORT", 00:09:34.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.126 "adrfam": "ipv4", 00:09:34.126 "trsvcid": "$NVMF_PORT", 00:09:34.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.126 "hdgst": ${hdgst:-false}, 00:09:34.126 "ddgst": ${ddgst:-false} 00:09:34.126 }, 00:09:34.126 "method": "bdev_nvme_attach_controller" 00:09:34.126 } 00:09:34.126 EOF 00:09:34.126 )") 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1329322 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.126 { 00:09:34.126 "params": { 00:09:34.126 "name": "Nvme$subsystem", 00:09:34.126 "trtype": "$TEST_TRANSPORT", 00:09:34.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.126 "adrfam": "ipv4", 00:09:34.126 "trsvcid": "$NVMF_PORT", 00:09:34.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.126 "hdgst": ${hdgst:-false}, 00:09:34.126 "ddgst": ${ddgst:-false} 00:09:34.126 }, 00:09:34.126 "method": 
"bdev_nvme_attach_controller" 00:09:34.126 } 00:09:34.126 EOF 00:09:34.126 )") 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.126 { 00:09:34.126 "params": { 00:09:34.126 "name": "Nvme$subsystem", 00:09:34.126 "trtype": "$TEST_TRANSPORT", 00:09:34.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.126 "adrfam": "ipv4", 00:09:34.126 "trsvcid": "$NVMF_PORT", 00:09:34.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.126 "hdgst": ${hdgst:-false}, 00:09:34.126 "ddgst": ${ddgst:-false} 00:09:34.126 }, 00:09:34.126 "method": "bdev_nvme_attach_controller" 00:09:34.126 } 00:09:34.126 EOF 00:09:34.126 )") 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1329312 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.126 "params": { 00:09:34.126 "name": "Nvme1", 00:09:34.126 "trtype": "rdma", 00:09:34.126 "traddr": "192.168.100.8", 00:09:34.126 "adrfam": "ipv4", 00:09:34.126 "trsvcid": "4420", 00:09:34.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.126 "hdgst": false, 00:09:34.126 "ddgst": false 00:09:34.126 }, 00:09:34.126 "method": "bdev_nvme_attach_controller" 00:09:34.126 }' 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.126 "params": { 00:09:34.126 "name": "Nvme1", 00:09:34.126 "trtype": "rdma", 00:09:34.126 "traddr": "192.168.100.8", 00:09:34.126 "adrfam": "ipv4", 00:09:34.126 "trsvcid": "4420", 00:09:34.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.126 "hdgst": false, 00:09:34.126 "ddgst": false 00:09:34.126 }, 00:09:34.126 "method": "bdev_nvme_attach_controller" 00:09:34.126 }' 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.126 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.126 "params": { 00:09:34.126 "name": "Nvme1", 00:09:34.126 "trtype": "rdma", 00:09:34.126 "traddr": "192.168.100.8", 00:09:34.126 "adrfam": "ipv4", 00:09:34.126 "trsvcid": "4420", 00:09:34.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.126 "hdgst": false, 00:09:34.126 "ddgst": false 00:09:34.126 }, 00:09:34.126 "method": "bdev_nvme_attach_controller" 00:09:34.126 }' 00:09:34.127 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.127 10:51:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.127 "params": { 00:09:34.127 "name": "Nvme1", 00:09:34.127 "trtype": "rdma", 00:09:34.127 "traddr": "192.168.100.8", 00:09:34.127 "adrfam": "ipv4", 00:09:34.127 "trsvcid": "4420", 00:09:34.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.127 "hdgst": false, 00:09:34.127 "ddgst": false 00:09:34.127 }, 00:09:34.127 "method": "bdev_nvme_attach_controller" 00:09:34.127 }' 00:09:34.127 [2024-11-15 10:51:22.605920] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:09:34.127 [2024-11-15 10:51:22.605968] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:34.127 [2024-11-15 10:51:22.606440] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:09:34.127 [2024-11-15 10:51:22.606490] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.127 [2024-11-15 10:51:22.609275] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:09:34.127 [2024-11-15 10:51:22.609319] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.127 [2024-11-15 10:51:22.611773] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:09:34.127 [2024-11-15 10:51:22.611816] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:09:34.127 [2024-11-15 10:51:22.799910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:34.127 [2024-11-15 10:51:22.846890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:34.127 [2024-11-15 10:51:22.848263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:34.127 [2024-11-15 10:51:22.883027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:34.127 [2024-11-15 10:51:22.958564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:34.385 [2024-11-15 10:51:23.014442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:34.385 [2024-11-15 10:51:23.017674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:34.385 [2024-11-15 10:51:23.055126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:09:34.385 Running I/O for 1 seconds...
00:09:34.385 Running I/O for 1 seconds...
00:09:34.385 Running I/O for 1 seconds...
00:09:34.385 Running I/O for 1 seconds...
00:09:35.315 246712.00 IOPS, 963.72 MiB/s
00:09:35.315 Latency(us)
00:09:35.315 [2024-11-15T09:51:24.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:35.315 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:35.315 Nvme1n1 : 1.00 246329.08 962.22 0.00 0.00 517.53 242.20 2108.55
00:09:35.315 [2024-11-15T09:51:24.199Z] ===================================================================================================================
00:09:35.315 [2024-11-15T09:51:24.199Z] Total : 246329.08 962.22 0.00 0.00 517.53 242.20 2108.55
00:09:35.315 16696.00 IOPS, 65.22 MiB/s
00:09:35.315 Latency(us)
[2024-11-15T09:51:24.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:35.315 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:35.315 Nvme1n1 : 1.01 16729.90 65.35 0.00 0.00 7626.41 4587.52 13278.16
00:09:35.315 [2024-11-15T09:51:24.199Z] ===================================================================================================================
00:09:35.315 [2024-11-15T09:51:24.199Z] Total : 16729.90 65.35 0.00 0.00 7626.41 4587.52 13278.16
00:09:35.315 14314.00 IOPS, 55.91 MiB/s
00:09:35.315 Latency(us)
[2024-11-15T09:51:24.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:35.315 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:35.315 Nvme1n1 : 1.01 14365.59 56.12 0.00 0.00 8879.78 4872.46 16754.42
00:09:35.315 [2024-11-15T09:51:24.199Z] ===================================================================================================================
00:09:35.315 [2024-11-15T09:51:24.199Z] Total : 14365.59 56.12 0.00 0.00 8879.78 4872.46 16754.42
00:09:35.573 17306.00 IOPS, 67.60 MiB/s
00:09:35.573 Latency(us)
[2024-11-15T09:51:24.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:35.573 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:35.573 Nvme1n1 : 1.01 17403.74 67.98 0.00 0.00 7337.70 2820.90 17210.32
00:09:35.573 [2024-11-15T09:51:24.457Z] ===================================================================================================================
00:09:35.573 [2024-11-15T09:51:24.457Z] Total : 17403.74 67.98 0.00 0.00 7337.70 2820.90 17210.32
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1329315
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1329318
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1329322
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1329129 ']'
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1329129
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1329129 ']'
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1329129
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:35.573 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1329129
00:09:35.831 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 =
sudo ']' 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1329129' 00:09:35.832 killing process with pid 1329129 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1329129 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1329129 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:35.832 00:09:35.832 real 0m8.583s 00:09:35.832 user 0m16.833s 00:09:35.832 sys 0m5.491s 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.832 ************************************ 00:09:35.832 END TEST nvmf_bdev_io_wait 00:09:35.832 ************************************ 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.832 10:51:24 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 ************************************ 00:09:36.091 START TEST nvmf_queue_depth 00:09:36.091 ************************************ 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:36.091 * Looking for test storage... 
00:09:36.091 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.091 --rc genhtml_branch_coverage=1 00:09:36.091 --rc genhtml_function_coverage=1 00:09:36.091 --rc genhtml_legend=1 00:09:36.091 --rc geninfo_all_blocks=1 00:09:36.091 --rc geninfo_unexecuted_blocks=1 00:09:36.091 00:09:36.091 ' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.091 --rc genhtml_branch_coverage=1 00:09:36.091 --rc genhtml_function_coverage=1 00:09:36.091 --rc genhtml_legend=1 00:09:36.091 --rc geninfo_all_blocks=1 00:09:36.091 --rc geninfo_unexecuted_blocks=1 00:09:36.091 00:09:36.091 ' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:36.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.091 --rc genhtml_branch_coverage=1 00:09:36.091 --rc genhtml_function_coverage=1 00:09:36.091 --rc genhtml_legend=1 00:09:36.091 --rc geninfo_all_blocks=1 00:09:36.091 --rc geninfo_unexecuted_blocks=1 00:09:36.091 00:09:36.091 ' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.091 --rc genhtml_branch_coverage=1 00:09:36.091 --rc genhtml_function_coverage=1 00:09:36.091 --rc genhtml_legend=1 00:09:36.091 --rc geninfo_all_blocks=1 00:09:36.091 --rc geninfo_unexecuted_blocks=1 00:09:36.091 00:09:36.091 ' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.091 10:51:24 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.091 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.092 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.092 10:51:24 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.359 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:09:41.360 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:09:41.360 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == 
unbound ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:09:41.360 Found net devices under 0000:af:00.0: mlx_0_0 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:09:41.360 Found net devices under 0000:af:00.1: mlx_0_1 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe 
ib_core 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:41.360 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:41.361 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:41.361 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:09:41.361 altname enp175s0f0np0 00:09:41.361 altname ens801f0np0 00:09:41.361 inet 192.168.100.8/24 scope global mlx_0_0 00:09:41.361 valid_lft forever preferred_lft forever 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:41.361 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:41.361 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:09:41.361 altname enp175s0f1np1 00:09:41.361 altname ens801f1np1 00:09:41.361 inet 192.168.100.9/24 scope global mlx_0_1 00:09:41.361 valid_lft forever preferred_lft forever 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
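For reference, the get_ip_address calls traced at common.sh@116-@117 (both here and in the bdev_io_wait test earlier) reduce to a three-stage pipeline. A minimal re-creation, inferred from the xtrace lines above (the real helper lives in test/nvmf/common.sh):

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per IPv4 address; field 4 is "ADDR/PREFIX",
    # so awk selects the field and cut drops the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this rig
get_ip_address mlx_0_1   # 192.168.100.9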
00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:41.361 192.168.100.9' 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:41.361 192.168.100.9' 00:09:41.361 10:51:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # head -n 1 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:41.361 192.168.100.9' 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- 
# head -n 1 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1332746 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1332746 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1332746 ']' 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:41.361 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.361 [2024-11-15 10:51:30.075297] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:09:41.361 [2024-11-15 10:51:30.075343] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.361 [2024-11-15 10:51:30.141130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.361 [2024-11-15 10:51:30.183870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.361 [2024-11-15 10:51:30.183902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:41.361 [2024-11-15 10:51:30.183909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.361 [2024-11-15 10:51:30.183915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.361 [2024-11-15 10:51:30.183921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.361 [2024-11-15 10:51:30.184506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.628 [2024-11-15 10:51:30.346901] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24972d0/0x249b7c0) succeed. 00:09:41.628 [2024-11-15 10:51:30.356595] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2498780/0x24dce60) succeed. 
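The nvmfappstart trace above launches nvmf_tgt with core mask 0x2, waits for /var/tmp/spdk.sock, and creates the RDMA transport (the two create_ib_device notices mark success). A minimal sketch of the same bring-up outside the harness, assuming a built SPDK tree in ./spdk; rpc_cmd in this log wraps scripts/rpc.py, and framework_wait_init stands in for the harness's socket-polling waitforlisten:

    # Start the target on core 1 (-m 0x2) with all tracepoint groups enabled.
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # Block until the app has finished initializing and is serving RPCs.
    ./spdk/scripts/rpc.py framework_wait_init
    # Same transport options as queue_depth.sh@23 above.
    ./spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192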
00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.628 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.628 Malloc0 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.629 [2024-11-15 10:51:30.438428] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1332771 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1332771 /var/tmp/bdevperf.sock 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1332771 ']' 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.629 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:41.630 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:41.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
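queue_depth.sh@24-27 above provision the device under test entirely over RPC: a malloc bdev, a subsystem, a namespace, and an RDMA listener. The same four steps via scripts/rpc.py, assuming the transport created in the previous step:

    # 64 MiB malloc bdev with 512-byte blocks, exposed as a namespace of
    # cnode1 and reachable on the RDMA listener at 192.168.100.8:4420.
    ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420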
00:09:41.630 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:41.630 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:41.630 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.630 [2024-11-15 10:51:30.486976] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:09:41.630 [2024-11-15 10:51:30.487017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332771 ] 00:09:41.892 [2024-11-15 10:51:30.548338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.892 [2024-11-15 10:51:30.591987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.892 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:41.892 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:41.892 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:41.892 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.893 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.893 NVMe0n1 00:09:41.893 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.893 10:51:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:42.151 Running I/O for 10 seconds... 
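On the initiator side, bdevperf is started with -z so it idles until a bdev is attached, the NVMe controller is connected over bdevperf's private RPC socket, and bdevperf.py then triggers the timed run whose per-second IOPS samples follow. A hedged sketch of that sequence, mirroring queue_depth.sh@29-35 above:

    # Queue depth 1024, 4 KiB I/O, verify workload, 10 seconds; -z makes
    # bdevperf wait for perform_tests instead of starting immediately.
    ./spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Start the queued job and print the results table shown below.
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests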
00:09:44.023 16384.00 IOPS, 64.00 MiB/s [2024-11-15T09:51:34.284Z] 16528.00 IOPS, 64.56 MiB/s [2024-11-15T09:51:35.220Z] 16725.33 IOPS, 65.33 MiB/s [2024-11-15T09:51:36.154Z] 16779.50 IOPS, 65.54 MiB/s [2024-11-15T09:51:37.090Z] 16793.60 IOPS, 65.60 MiB/s [2024-11-15T09:51:38.026Z] 16870.33 IOPS, 65.90 MiB/s [2024-11-15T09:51:38.962Z] 16860.14 IOPS, 65.86 MiB/s [2024-11-15T09:51:40.337Z] 16896.00 IOPS, 66.00 MiB/s [2024-11-15T09:51:40.904Z] 16881.56 IOPS, 65.94 MiB/s [2024-11-15T09:51:41.163Z] 16896.00 IOPS, 66.00 MiB/s 00:09:52.279 Latency(us) 00:09:52.279 [2024-11-15T09:51:41.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.279 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:52.279 Verification LBA range: start 0x0 length 0x4000 00:09:52.279 NVMe0n1 : 10.05 16902.10 66.02 0.00 0.00 60421.15 22795.13 38295.82 00:09:52.279 [2024-11-15T09:51:41.163Z] =================================================================================================================== 00:09:52.279 [2024-11-15T09:51:41.163Z] Total : 16902.10 66.02 0.00 0.00 60421.15 22795.13 38295.82 00:09:52.279 { 00:09:52.279 "results": [ 00:09:52.279 { 00:09:52.279 "job": "NVMe0n1", 00:09:52.279 "core_mask": "0x1", 00:09:52.279 "workload": "verify", 00:09:52.279 "status": "finished", 00:09:52.279 "verify_range": { 00:09:52.279 "start": 0, 00:09:52.279 "length": 16384 00:09:52.279 }, 00:09:52.279 "queue_depth": 1024, 00:09:52.279 "io_size": 4096, 00:09:52.279 "runtime": 10.052836, 00:09:52.279 "iops": 16902.096085124635, 00:09:52.279 "mibps": 66.0238128325181, 00:09:52.279 "io_failed": 0, 00:09:52.279 "io_timeout": 0, 00:09:52.279 "avg_latency_us": 60421.154621381356, 00:09:52.279 "min_latency_us": 22795.130434782608, 00:09:52.279 "max_latency_us": 38295.819130434786 00:09:52.279 } 00:09:52.279 ], 00:09:52.279 "core_count": 1 00:09:52.279 } 00:09:52.279 10:51:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1332771 00:09:52.279 10:51:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1332771 ']' 00:09:52.279 10:51:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1332771 00:09:52.279 10:51:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:52.279 10:51:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:52.279 10:51:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1332771 00:09:52.279 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:52.279 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:52.279 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1332771' 00:09:52.279 killing process with pid 1332771 00:09:52.279 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1332771 00:09:52.279 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.279 00:09:52.279 Latency(us) 00:09:52.279 [2024-11-15T09:51:41.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.279 [2024-11-15T09:51:41.163Z] 
=================================================================================================================== 00:09:52.279 [2024-11-15T09:51:41.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.279 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1332771 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:52.542 rmmod nvme_rdma 00:09:52.542 rmmod nvme_fabrics 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1332746 ']' 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1332746 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1332746 ']' 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1332746 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1332746 00:09:52.542 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:52.543 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:52.543 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1332746' 00:09:52.543 killing process with pid 1332746 00:09:52.543 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1332746 00:09:52.543 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1332746 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:52.849 00:09:52.849 real 0m16.752s 00:09:52.849 user 0m23.642s 00:09:52.849 sys 0m4.410s 00:09:52.849 
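nvmftestfini above unwinds the setup: both SPDK processes are killed by pid (killprocess 1332771, then 1332746) and the kernel initiator modules are removed, matching the rmmod nvme_rdma / rmmod nvme_fabrics lines. A minimal equivalent, assuming the pids were captured at launch:

    # Stop bdevperf and nvmf_tgt, then unload the initiator modules in the
    # order the modprobe -v -r trace above shows.
    kill "$bdevperf_pid" "$nvmfpid"
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics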
10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.849 ************************************ 00:09:52.849 END TEST nvmf_queue_depth 00:09:52.849 ************************************ 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.849 ************************************ 00:09:52.849 START TEST nvmf_target_multipath 00:09:52.849 ************************************ 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:52.849 * Looking for test storage... 00:09:52.849 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.849 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:52.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.850 --rc genhtml_branch_coverage=1 00:09:52.850 --rc genhtml_function_coverage=1 00:09:52.850 --rc genhtml_legend=1 00:09:52.850 --rc geninfo_all_blocks=1 00:09:52.850 --rc geninfo_unexecuted_blocks=1 00:09:52.850 00:09:52.850 ' 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:52.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.850 --rc genhtml_branch_coverage=1 00:09:52.850 --rc genhtml_function_coverage=1 00:09:52.850 --rc genhtml_legend=1 00:09:52.850 --rc geninfo_all_blocks=1 00:09:52.850 --rc geninfo_unexecuted_blocks=1 00:09:52.850 00:09:52.850 ' 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:52.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.850 --rc genhtml_branch_coverage=1 00:09:52.850 --rc genhtml_function_coverage=1 00:09:52.850 --rc genhtml_legend=1 00:09:52.850 --rc geninfo_all_blocks=1 00:09:52.850 --rc geninfo_unexecuted_blocks=1 00:09:52.850 00:09:52.850 ' 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:52.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.850 --rc genhtml_branch_coverage=1 00:09:52.850 --rc genhtml_function_coverage=1 00:09:52.850 --rc genhtml_legend=1 00:09:52.850 --rc geninfo_all_blocks=1 00:09:52.850 --rc geninfo_unexecuted_blocks=1 00:09:52.850 00:09:52.850 ' 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.850 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.146 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.146 10:51:41 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:09:58.425 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:09:58.425 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:09:58.425 Found net devices under 0000:af:00.0: mlx_0_0 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:09:58.425 Found net devices under 0000:af:00.1: mlx_0_1 00:09:58.425 10:51:47 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:58.425 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # 
continue 2 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:58.426 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:58.426 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:09:58.426 altname enp175s0f0np0 00:09:58.426 altname ens801f0np0 00:09:58.426 inet 192.168.100.8/24 scope global mlx_0_0 00:09:58.426 valid_lft forever preferred_lft forever 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:58.426 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:58.426 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:09:58.426 altname enp175s0f1np1 00:09:58.426 
altname ens801f1np1 00:09:58.426 inet 192.168.100.9/24 scope global mlx_0_1 00:09:58.426 valid_lft forever preferred_lft forever 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:58.426 10:51:47 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:58.426 192.168.100.9' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:58.426 192.168.100.9' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:58.426 192.168.100.9' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:58.426 run this test only with TCP transport for now 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:58.426 10:51:47 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.426 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:58.426 rmmod nvme_rdma 00:09:58.685 rmmod nvme_fabrics 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:58.685 00:09:58.685 real 0m5.789s 00:09:58.685 user 0m1.715s 00:09:58.685 sys 0m4.153s 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:58.685 ************************************ 00:09:58.685 END TEST nvmf_target_multipath 00:09:58.685 ************************************ 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh 
--transport=rdma 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.685 ************************************ 00:09:58.685 START TEST nvmf_zcopy 00:09:58.685 ************************************ 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:58.685 * Looking for test storage... 00:09:58.685 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:58.685 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.945 --rc genhtml_branch_coverage=1 00:09:58.945 --rc genhtml_function_coverage=1 00:09:58.945 --rc genhtml_legend=1 00:09:58.945 --rc geninfo_all_blocks=1 00:09:58.945 --rc geninfo_unexecuted_blocks=1 00:09:58.945 00:09:58.945 ' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.945 --rc genhtml_branch_coverage=1 00:09:58.945 --rc genhtml_function_coverage=1 00:09:58.945 --rc genhtml_legend=1 00:09:58.945 --rc geninfo_all_blocks=1 00:09:58.945 --rc geninfo_unexecuted_blocks=1 00:09:58.945 00:09:58.945 ' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.945 --rc genhtml_branch_coverage=1 00:09:58.945 --rc genhtml_function_coverage=1 00:09:58.945 --rc genhtml_legend=1 00:09:58.945 --rc geninfo_all_blocks=1 00:09:58.945 --rc geninfo_unexecuted_blocks=1 00:09:58.945 00:09:58.945 ' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:58.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.945 --rc genhtml_branch_coverage=1 00:09:58.945 --rc genhtml_function_coverage=1 00:09:58.945 --rc genhtml_legend=1 00:09:58.945 --rc geninfo_all_blocks=1 00:09:58.945 --rc geninfo_unexecuted_blocks=1 00:09:58.945 00:09:58.945 ' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.945 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.946 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.946 10:51:47 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:10:04.217 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:10:04.217 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma 
== tcp ]] 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.217 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:10:04.217 Found net devices under 0000:af:00.0: mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:10:04.218 Found net devices under 0000:af:00.1: mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:04.218 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:04.218 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:10:04.218 altname enp175s0f0np0 00:10:04.218 altname ens801f0np0 00:10:04.218 inet 192.168.100.8/24 scope global mlx_0_0 00:10:04.218 valid_lft forever preferred_lft forever 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:04.218 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:04.218 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:10:04.218 altname enp175s0f1np1 00:10:04.218 altname ens801f1np1 00:10:04.218 inet 192.168.100.9/24 scope global mlx_0_1 00:10:04.218 valid_lft forever preferred_lft forever 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:04.218 10:51:52 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.218 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:04.219 192.168.100.9' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:04.219 192.168.100.9' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:04.219 192.168.100.9' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1340844 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1340844 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 
-- # '[' -z 1340844 ']' 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.219 10:51:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:04.219 [2024-11-15 10:51:52.914083] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:10:04.219 [2024-11-15 10:51:52.914137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.219 [2024-11-15 10:51:52.979694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.219 [2024-11-15 10:51:53.020562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.219 [2024-11-15 10:51:53.020599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.219 [2024-11-15 10:51:53.020606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.219 [2024-11-15 10:51:53.020611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.219 [2024-11-15 10:51:53.020617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
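The trace above launches nvmf_tgt and then blocks in waitforlisten until pid 1340844 is up and answering RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling pattern, assuming an illustrative helper name, retry bound, and an $SPDK_DIR variable pointing at the checkout (this is not SPDK's exact implementation):

    # Poll until the target process is alive and its RPC socket answers.
    # wait_for_rpc <pid> [rpc_sock] -- name and retry bound are illustrative.
    wait_for_rpc() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # rpc_get_methods succeeds once the app is listening on the socket
            if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # gave up waiting for the listener
    }
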
00:10:04.219 [2024-11-15 10:51:53.021228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:04.478 Unsupported transport: rdma 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # type=--id 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@811 -- # id=0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@822 -- # for n in $shm_files 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:04.478 nvmf_trace.0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # return 0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:04.478 rmmod nvme_rdma 00:10:04.478 rmmod nvme_fabrics 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
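The killprocess sequence traced above probes liveness with kill -0, checks the process name via ps so a sudo wrapper is never signalled directly, then kills and reaps the pid. A sketch reconstructed from those traced commands (the real helper in autotest_common.sh handles the sudo case by descending to the child pid, which is simplified to a bail-out here):

    # Terminate a test daemon by pid, mirroring the traced killprocess steps.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1            # no pid recorded
        kill -0 "$pid" || return 1           # process already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # never signal a sudo wrapper directly (simplified: bail out)
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap and propagate exit status
    }
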
00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1340844 ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1340844 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1340844 ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1340844 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1340844 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1340844' 00:10:04.478 killing process with pid 1340844 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1340844 00:10:04.478 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1340844 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:04.737 00:10:04.737 real 0m6.012s 00:10:04.737 user 0m2.289s 00:10:04.737 sys 0m4.229s 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.737 ************************************ 00:10:04.737 END TEST nvmf_zcopy 00:10:04.737 ************************************ 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.737 ************************************ 00:10:04.737 START TEST nvmf_nmic 00:10:04.737 ************************************ 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:04.737 * Looking for test storage... 
00:10:04.737 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.737 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.997 --rc genhtml_branch_coverage=1 00:10:04.997 --rc genhtml_function_coverage=1 00:10:04.997 --rc genhtml_legend=1 00:10:04.997 --rc geninfo_all_blocks=1 00:10:04.997 --rc geninfo_unexecuted_blocks=1 00:10:04.997 00:10:04.997 ' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.997 --rc genhtml_branch_coverage=1 00:10:04.997 --rc genhtml_function_coverage=1 00:10:04.997 --rc genhtml_legend=1 00:10:04.997 --rc geninfo_all_blocks=1 00:10:04.997 --rc geninfo_unexecuted_blocks=1 00:10:04.997 00:10:04.997 ' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.997 --rc genhtml_branch_coverage=1 00:10:04.997 --rc genhtml_function_coverage=1 00:10:04.997 --rc genhtml_legend=1 00:10:04.997 --rc geninfo_all_blocks=1 00:10:04.997 --rc geninfo_unexecuted_blocks=1 00:10:04.997 00:10:04.997 ' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.997 --rc genhtml_branch_coverage=1 00:10:04.997 --rc genhtml_function_coverage=1 00:10:04.997 --rc genhtml_legend=1 00:10:04.997 --rc geninfo_all_blocks=1 00:10:04.997 --rc geninfo_unexecuted_blocks=1 00:10:04.997 00:10:04.997 ' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.997 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.998 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
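nvmftestinit next discovers the physical NICs: it scans the PCI bus for the supported Mellanox and Intel device IDs and resolves each matching function to its kernel netdev through sysfs, as the following trace shows. The sysfs mapping reduces to a couple of lines of bash; a sketch with an illustrative function name, built from the pci_net_devs globbing visible in the trace below:

    # Resolve a PCI function (e.g. 0000:af:00.0) to its kernel net devices,
    # mirroring the pci_net_devs handling traced below.
    pci_to_netdevs() {
        local pci=$1
        local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs glob
        pci_net_devs=("${pci_net_devs[@]##*/}")                 # keep leaf names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    }
    # pci_to_netdevs 0000:af:00.0  ->  Found net devices under 0000:af:00.0: mlx_0_0
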
00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.998 10:51:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:11.569 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.570 10:51:59 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:10:11.570 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:10:11.570 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:10:11.570 Found net devices under 0000:af:00.0: mlx_0_0 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:10:11.570 Found net devices under 0000:af:00.1: mlx_0_1 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 
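
allocate_nic_ips iterates get_rdma_if_list and, for each RDMA-capable interface, pulls the primary IPv4 address with the ip/awk/cut pipeline traced below. As a standalone sketch (interface name taken from the devices found above):

  #!/usr/bin/env bash
  interface=mlx_0_0   # one of the RDMA netdevs discovered above

  # 'ip -o' emits one record per line; field 4 holds "address/prefix",
  # so awk selects the field and cut strips the prefix length.
  ip_addr=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
  echo "$interface -> ${ip_addr:-<no IPv4 address>}"
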
00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:11.570 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:11.571 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.571 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:10:11.571 altname enp175s0f0np0 00:10:11.571 altname ens801f0np0 00:10:11.571 inet 192.168.100.8/24 scope global mlx_0_0 00:10:11.571 valid_lft forever preferred_lft forever 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:11.571 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.571 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:10:11.571 altname enp175s0f1np1 00:10:11.571 altname ens801f1np1 00:10:11.571 inet 192.168.100.9/24 scope global mlx_0_1 00:10:11.571 valid_lft forever preferred_lft forever 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # 
continue 2 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:11.571 192.168.100.9' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:11.571 192.168.100.9' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:11.571 192.168.100.9' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1344197 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1344197 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1344197 ']' 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:11.571 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.571 [2024-11-15 10:51:59.506596] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:10:11.571 [2024-11-15 10:51:59.506649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.571 [2024-11-15 10:51:59.568987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.571 [2024-11-15 10:51:59.612193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.571 [2024-11-15 10:51:59.612234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.571 [2024-11-15 10:51:59.612244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.571 [2024-11-15 10:51:59.612250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.571 [2024-11-15 10:51:59.612255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
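
With the reactors up, nmic.sh drives the target over the JSON-RPC socket. Condensed from the rpc_cmd traces that follow, the positive-path setup is roughly this sequence (rpc.py path as printed elsewhere in this run); the test then deliberately adds Malloc0 to a second subsystem and expects the -32602 "Invalid parameters" error shown further below:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
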
00:10:11.571 [2024-11-15 10:51:59.613790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.571 [2024-11-15 10:51:59.613887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.572 [2024-11-15 10:51:59.613966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.572 [2024-11-15 10:51:59.613969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 [2024-11-15 10:51:59.780535] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7db230/0x7df720) succeed. 00:10:11.572 [2024-11-15 10:51:59.789927] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7dc8c0/0x820dc0) succeed. 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 Malloc0 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:11.572 10:51:59 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 [2024-11-15 10:51:59.970934] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:11.572 test case1: single bdev can't be used in multiple subsystems 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 [2024-11-15 10:51:59.994720] bdev.c:8468:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:11.572 [2024-11-15 10:51:59.994740] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:11.572 [2024-11-15 10:51:59.994748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 request: 00:10:11.572 { 00:10:11.572 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:11.572 "namespace": { 00:10:11.572 "bdev_name": "Malloc0", 00:10:11.572 "no_auto_visible": false, 00:10:11.572 "no_metadata": false 00:10:11.572 }, 00:10:11.572 "method": "nvmf_subsystem_add_ns", 00:10:11.572 "req_id": 1 00:10:11.572 } 00:10:11.572 Got JSON-RPC error response 00:10:11.572 response: 00:10:11.572 { 00:10:11.572 "code": -32602, 00:10:11.572 "message": "Invalid parameters" 00:10:11.572 } 00:10:11.572 10:51:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:10:11.572 Adding namespace failed - expected result. 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:11.572 test case2: host connect to nvmf target in multiple paths 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.572 [2024-11-15 10:52:00.006791] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.572 10:52:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:14.856 10:52:03 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:17.388 10:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:17.388 10:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:17.388 10:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.388 10:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:17.388 10:52:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:19.918 10:52:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:19.918 10:52:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:19.918 10:52:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.918 10:52:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:19.918 10:52:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.918 10:52:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:19.918 10:52:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:19.918 [global] 00:10:19.918 thread=1 00:10:19.918 invalidate=1 00:10:19.918 rw=write 00:10:19.918 time_based=1 00:10:19.918 runtime=1 00:10:19.918 ioengine=libaio 00:10:19.918 direct=1 00:10:19.918 bs=4096 00:10:19.918 iodepth=1 00:10:19.918 norandommap=0 00:10:19.918 numjobs=1 00:10:19.918 00:10:19.918 verify_dump=1 00:10:19.918 verify_backlog=512 00:10:19.918 verify_state_save=0 00:10:19.918 do_verify=1 00:10:19.918 verify=crc32c-intel 00:10:19.918 [job0] 00:10:19.918 filename=/dev/nvme0n1 00:10:19.918 Could not set queue depth 
(nvme0n1) 00:10:19.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:19.918 fio-3.35 00:10:19.918 Starting 1 thread 00:10:20.852 00:10:20.852 job0: (groupid=0, jobs=1): err= 0: pid=1345896: Fri Nov 15 10:52:09 2024 00:10:20.852 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:10:20.852 slat (nsec): min=6155, max=35387, avg=7058.27, stdev=872.24 00:10:20.852 clat (usec): min=47, max=163, avg=59.27, stdev= 4.00 00:10:20.852 lat (usec): min=56, max=170, avg=66.33, stdev= 4.09 00:10:20.852 clat percentiles (usec): 00:10:20.852 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:10:20.852 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:10:20.852 | 70.00th=[ 62], 80.00th=[ 63], 90.00th=[ 65], 95.00th=[ 67], 00:10:20.852 | 99.00th=[ 70], 99.50th=[ 71], 99.90th=[ 76], 99.95th=[ 79], 00:10:20.852 | 99.99th=[ 165] 00:10:20.852 write: IOPS=7423, BW=29.0MiB/s (30.4MB/s)(29.0MiB/1001msec); 0 zone resets 00:10:20.852 slat (nsec): min=8303, max=40101, avg=9140.62, stdev=840.92 00:10:20.852 clat (usec): min=45, max=104, avg=57.24, stdev= 3.97 00:10:20.852 lat (usec): min=57, max=139, avg=66.38, stdev= 4.08 00:10:20.852 clat percentiles (usec): 00:10:20.852 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:10:20.852 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:10:20.852 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 64], 00:10:20.852 | 99.00th=[ 68], 99.50th=[ 70], 99.90th=[ 77], 99.95th=[ 87], 00:10:20.852 | 99.99th=[ 104] 00:10:20.852 bw ( KiB/s): min=29784, max=29784, per=100.00%, avg=29784.00, stdev= 0.00, samples=1 00:10:20.852 iops : min= 7446, max= 7446, avg=7446.00, stdev= 0.00, samples=1 00:10:20.852 lat (usec) : 50=0.66%, 100=99.32%, 250=0.01% 00:10:20.852 cpu : usr=7.60%, sys=15.10%, ctx=14599, majf=0, minf=1 00:10:20.852 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.852 issued rwts: total=7168,7431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.852 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.852 00:10:20.852 Run status group 0 (all jobs): 00:10:20.852 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:10:20.852 WRITE: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=29.0MiB (30.4MB), run=1001-1001msec 00:10:20.852 00:10:20.852 Disk stats (read/write): 00:10:20.852 nvme0n1: ios=6509/6656, merge=0/0, ticks=346/343, in_queue=689, util=90.68% 00:10:20.852 10:52:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 
00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:26.120 rmmod nvme_rdma 00:10:26.120 rmmod nvme_fabrics 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1344197 ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1344197 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1344197 ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1344197 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1344197 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1344197' 00:10:26.120 killing process with pid 1344197 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1344197 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1344197 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:26.120 00:10:26.120 real 0m21.288s 00:10:26.120 user 1m7.133s 00:10:26.120 sys 0m5.358s 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 ************************************ 00:10:26.120 END TEST 
nvmf_nmic 00:10:26.120 ************************************ 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 ************************************ 00:10:26.120 START TEST nvmf_fio_target 00:10:26.120 ************************************ 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:26.120 * Looking for test storage... 00:10:26.120 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:26.120 10:52:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:26.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.379 --rc genhtml_branch_coverage=1 00:10:26.379 --rc genhtml_function_coverage=1 00:10:26.379 --rc genhtml_legend=1 00:10:26.379 --rc geninfo_all_blocks=1 00:10:26.379 --rc geninfo_unexecuted_blocks=1 00:10:26.379 00:10:26.379 ' 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:26.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.379 --rc genhtml_branch_coverage=1 00:10:26.379 --rc genhtml_function_coverage=1 00:10:26.379 --rc genhtml_legend=1 00:10:26.379 --rc geninfo_all_blocks=1 00:10:26.379 --rc geninfo_unexecuted_blocks=1 00:10:26.379 00:10:26.379 ' 00:10:26.379 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:26.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.380 --rc genhtml_branch_coverage=1 00:10:26.380 --rc genhtml_function_coverage=1 00:10:26.380 --rc genhtml_legend=1 00:10:26.380 --rc geninfo_all_blocks=1 00:10:26.380 --rc geninfo_unexecuted_blocks=1 00:10:26.380 00:10:26.380 ' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:26.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.380 --rc genhtml_branch_coverage=1 00:10:26.380 --rc genhtml_function_coverage=1 00:10:26.380 --rc genhtml_legend=1 00:10:26.380 --rc geninfo_all_blocks=1 00:10:26.380 --rc geninfo_unexecuted_blocks=1 00:10:26.380 00:10:26.380 ' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.380 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.380 
10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:26.380 10:52:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
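
The gather_supported_nvmf_pci_devs stretch of the trace here sorts NICs into the e810/x722/mlx arrays by PCI vendor:device ID; 0x15b3:0x1017 is what both 0000:af:00.x ports report. A reduced sketch of the classification, with an illustrative pci_bus_cache (the real cache is built from a /sys/bus/pci scan):

  #!/usr/bin/env bash
  mellanox=0x15b3
  declare -A pci_bus_cache=(
      ["$mellanox:0x1017"]="0000:af:00.0 0000:af:00.1"   # illustrative entries
  )
  mlx=()
  for dev_id in 0x1017 0x1019 0x1015 0x1013; do
      # unquoted expansion word-splits the cached BDF list, as common.sh does
      mlx+=(${pci_bus_cache["$mellanox:$dev_id"]})
  done
  printf 'mlx NIC: %s\n' "${mlx[@]}"
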
00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.641 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:10:31.642 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:10:31.642 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:10:31.642 Found net devices under 0000:af:00.0: mlx_0_0 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:10:31.642 Found net devices under 0000:af:00.1: mlx_0_1 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe 
ib_umad 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.642 10:52:19 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:31.642 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.642 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:10:31.642 altname enp175s0f0np0 00:10:31.642 altname ens801f0np0 00:10:31.642 inet 192.168.100.8/24 scope global mlx_0_0 00:10:31.642 valid_lft forever preferred_lft forever 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:31.642 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.642 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:10:31.642 altname enp175s0f1np1 00:10:31.642 altname ens801f1np1 00:10:31.642 inet 192.168.100.9/24 scope global mlx_0_1 00:10:31.642 valid_lft forever preferred_lft forever 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:31.642 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.643 10:52:19 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:31.643 192.168.100.9' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:31.643 192.168.100.9' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:31.643 192.168.100.9' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:31.643 10:52:19 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1350003 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1350003 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1350003 ']' 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.643 10:52:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.643 [2024-11-15 10:52:19.888257] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:10:31.643 [2024-11-15 10:52:19.888302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.643 [2024-11-15 10:52:19.949762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.643 [2024-11-15 10:52:19.993125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.643 [2024-11-15 10:52:19.993160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.643 [2024-11-15 10:52:19.993171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.643 [2024-11-15 10:52:19.993177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.643 [2024-11-15 10:52:19.993182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
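At this point nvmfappstart has launched the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 1350003 above) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock, which is what the "Waiting for process to start up and listen on UNIX domain socket..." message refers to. A minimal sketch of such a readiness loop; rpc.py and its rpc_get_methods method are standard SPDK, but the retry count and interval here are illustrative, not the autotest_common.sh values:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Poll the RPC socket until the freshly started target responds; give up
# after roughly 10 seconds instead of hanging the job forever.
for ((i = 0; i < 100; i++)); do
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done

The reactor notices that follow are the target's own startup output: one poller thread per core in the 0xF mask.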
00:10:31.643 [2024-11-15 10:52:19.994809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:31.643 [2024-11-15 10:52:19.994912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:31.643 [2024-11-15 10:52:19.995005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:31.643 [2024-11-15 10:52:19.995006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:10:31.643 [2024-11-15 10:52:20.325606] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bec230/0x1bf0720) succeed.
00:10:31.643 [2024-11-15 10:52:20.335137] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bed8c0/0x1c31dc0) succeed.
00:10:31.643 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:31.901 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:10:31.901 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:32.159 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:10:32.159 10:52:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:32.417 10:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:10:32.417 10:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:32.675 10:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:10:32.675 10:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
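With the rdma transport created on the two IB devices above, fio.sh provisions its namespaces over the same RPC channel: two standalone malloc bdevs (Malloc0, Malloc1), then Malloc2 and Malloc3 striped into raid0 by the bdev_raid_create call just logged, and next a three-way concat set. Condensed into a plain script the sequence looks roughly like this; the sizes come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier in this log, and capturing the output works because bdev_malloc_create prints the new bdev's name, exactly as the malloc_bdevs='Malloc0 ' assignments record:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
malloc_bdevs=() raid_malloc_bdevs=()
# Two standalone 64 MiB, 512-byte-block malloc bdevs for plain namespaces.
for _ in 1 2; do malloc_bdevs+=("$("$rpc_py" bdev_malloc_create 64 512)"); done
# Two more, striped into a RAID-0 ("-r 0") with a 64 KiB strip ("-z 64").
for _ in 1 2; do raid_malloc_bdevs+=("$("$rpc_py" bdev_malloc_create 64 512)"); done
"$rpc_py" bdev_raid_create -n raid0 -z 64 -r 0 -b "${raid_malloc_bdevs[*]}"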
00:10:32.934 10:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:32.934 10:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:10:32.934 10:52:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:33.192 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:10:33.192 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:33.451 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:10:33.451 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:10:33.710 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:33.968 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:10:33.968 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:33.968 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:10:33.968 10:52:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:34.226 10:52:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:10:34.542 [2024-11-15 10:52:23.226235] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:10:34.542 10:52:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:10:34.801 10:52:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:10:34.801 10:52:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:10:38.086 10:52:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:10:38.086 10:52:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0
00:10:38.086 10:52:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:10:38.086 10:52:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]]
00:10:38.086 10:52:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4
00:10:38.086 10:52:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2
00:10:39.989 10:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:10:39.989 10:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:39.989 10:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.989 10:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:39.989 10:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:39.989 10:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:39.989 10:52:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:39.989 [global] 00:10:39.989 thread=1 00:10:39.989 invalidate=1 00:10:39.989 rw=write 00:10:39.989 time_based=1 00:10:39.989 runtime=1 00:10:39.989 ioengine=libaio 00:10:39.989 direct=1 00:10:39.989 bs=4096 00:10:39.989 iodepth=1 00:10:39.989 norandommap=0 00:10:39.989 numjobs=1 00:10:39.989 00:10:39.989 verify_dump=1 00:10:39.989 verify_backlog=512 00:10:39.989 verify_state_save=0 00:10:39.989 do_verify=1 00:10:39.989 verify=crc32c-intel 00:10:39.989 [job0] 00:10:39.989 filename=/dev/nvme0n1 00:10:39.989 [job1] 00:10:39.989 filename=/dev/nvme0n2 00:10:39.989 [job2] 00:10:39.989 filename=/dev/nvme0n3 00:10:39.989 [job3] 00:10:39.989 filename=/dev/nvme0n4 00:10:39.989 Could not set queue depth (nvme0n1) 00:10:39.989 Could not set queue depth (nvme0n2) 00:10:39.989 Could not set queue depth (nvme0n3) 00:10:39.989 Could not set queue depth (nvme0n4) 00:10:40.247 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.247 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.247 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.247 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.247 fio-3.35 00:10:40.247 Starting 4 threads 00:10:41.622 00:10:41.622 job0: (groupid=0, jobs=1): err= 0: pid=1351675: Fri Nov 15 10:52:30 2024 00:10:41.622 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:41.622 slat (nsec): min=6226, max=27157, avg=7506.38, stdev=1170.01 00:10:41.622 clat (usec): min=67, max=217, avg=126.99, stdev=20.17 00:10:41.622 lat (usec): min=74, max=225, avg=134.49, stdev=20.35 00:10:41.622 clat percentiles (usec): 00:10:41.622 | 1.00th=[ 80], 5.00th=[ 88], 10.00th=[ 106], 20.00th=[ 116], 00:10:41.622 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 128], 00:10:41.622 | 70.00th=[ 133], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 163], 00:10:41.623 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 204], 99.95th=[ 210], 00:10:41.623 | 99.99th=[ 219] 00:10:41.623 write: IOPS=3787, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec); 0 zone resets 00:10:41.623 slat (nsec): min=7829, max=53297, avg=9673.83, stdev=2163.83 00:10:41.623 clat (usec): min=61, max=458, avg=123.02, stdev=24.49 00:10:41.623 lat (usec): min=70, max=487, avg=132.70, stdev=25.01 00:10:41.623 clat percentiles (usec): 00:10:41.623 | 1.00th=[ 75], 5.00th=[ 82], 10.00th=[ 95], 20.00th=[ 112], 00:10:41.623 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 124], 00:10:41.623 | 70.00th=[ 128], 80.00th=[ 141], 90.00th=[ 155], 95.00th=[ 163], 00:10:41.623 | 99.00th=[ 190], 99.50th=[ 204], 99.90th=[ 293], 99.95th=[ 457], 
00:10:41.623 | 99.99th=[ 457] 00:10:41.623 bw ( KiB/s): min=16384, max=16384, per=24.87%, avg=16384.00, stdev= 0.00, samples=1 00:10:41.623 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:41.623 lat (usec) : 100=9.34%, 250=90.56%, 500=0.09% 00:10:41.623 cpu : usr=4.60%, sys=7.80%, ctx=7376, majf=0, minf=1 00:10:41.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 issued rwts: total=3584,3791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.623 job1: (groupid=0, jobs=1): err= 0: pid=1351676: Fri Nov 15 10:52:30 2024 00:10:41.623 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:41.623 slat (nsec): min=5838, max=18274, avg=7121.83, stdev=776.19 00:10:41.623 clat (usec): min=63, max=222, avg=129.16, stdev=20.41 00:10:41.623 lat (usec): min=70, max=229, avg=136.29, stdev=20.41 00:10:41.623 clat percentiles (usec): 00:10:41.623 | 1.00th=[ 80], 5.00th=[ 90], 10.00th=[ 109], 20.00th=[ 119], 00:10:41.623 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:10:41.623 | 70.00th=[ 135], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 163], 00:10:41.623 | 99.00th=[ 184], 99.50th=[ 198], 99.90th=[ 212], 99.95th=[ 221], 00:10:41.623 | 99.99th=[ 223] 00:10:41.623 write: IOPS=3968, BW=15.5MiB/s (16.3MB/s)(15.5MiB/1001msec); 0 zone resets 00:10:41.623 slat (nsec): min=7899, max=36301, avg=9121.72, stdev=1024.03 00:10:41.623 clat (usec): min=63, max=439, avg=116.08, stdev=23.87 00:10:41.623 lat (usec): min=72, max=448, avg=125.21, stdev=23.86 00:10:41.623 clat percentiles (usec): 00:10:41.623 | 1.00th=[ 73], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 100], 00:10:41.623 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:10:41.623 | 70.00th=[ 123], 80.00th=[ 128], 90.00th=[ 147], 95.00th=[ 155], 00:10:41.623 | 99.00th=[ 172], 99.50th=[ 184], 99.90th=[ 289], 99.95th=[ 363], 00:10:41.623 | 99.99th=[ 441] 00:10:41.623 bw ( KiB/s): min=16384, max=16384, per=24.87%, avg=16384.00, stdev= 0.00, samples=1 00:10:41.623 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:41.623 lat (usec) : 100=13.88%, 250=86.04%, 500=0.08% 00:10:41.623 cpu : usr=3.50%, sys=9.00%, ctx=7556, majf=0, minf=1 00:10:41.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 issued rwts: total=3584,3972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.623 job2: (groupid=0, jobs=1): err= 0: pid=1351677: Fri Nov 15 10:52:30 2024 00:10:41.623 read: IOPS=4254, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1001msec) 00:10:41.623 slat (nsec): min=6222, max=27891, avg=7447.85, stdev=942.42 00:10:41.623 clat (usec): min=67, max=417, avg=104.22, stdev=21.74 00:10:41.623 lat (usec): min=79, max=424, avg=111.67, stdev=21.73 00:10:41.623 clat percentiles (usec): 00:10:41.623 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:10:41.623 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 95], 60.00th=[ 116], 00:10:41.623 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 135], 00:10:41.623 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 180], 
99.95th=[ 210], 00:10:41.623 | 99.99th=[ 416] 00:10:41.623 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:41.623 slat (nsec): min=8118, max=38498, avg=9245.36, stdev=1003.98 00:10:41.623 clat (usec): min=67, max=236, avg=101.00, stdev=21.06 00:10:41.623 lat (usec): min=76, max=245, avg=110.24, stdev=21.07 00:10:41.623 clat percentiles (usec): 00:10:41.623 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 82], 00:10:41.623 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 113], 00:10:41.623 | 70.00th=[ 118], 80.00th=[ 122], 90.00th=[ 127], 95.00th=[ 133], 00:10:41.623 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 200], 00:10:41.623 | 99.99th=[ 237] 00:10:41.623 bw ( KiB/s): min=16384, max=16384, per=24.87%, avg=16384.00, stdev= 0.00, samples=1 00:10:41.623 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:41.623 lat (usec) : 100=54.96%, 250=45.02%, 500=0.02% 00:10:41.623 cpu : usr=4.30%, sys=10.50%, ctx=8867, majf=0, minf=1 00:10:41.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 issued rwts: total=4259,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.623 job3: (groupid=0, jobs=1): err= 0: pid=1351678: Fri Nov 15 10:52:30 2024 00:10:41.623 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:41.623 slat (nsec): min=6305, max=28787, avg=7405.85, stdev=917.55 00:10:41.623 clat (usec): min=72, max=308, avg=116.40, stdev=26.15 00:10:41.623 lat (usec): min=80, max=315, avg=123.80, stdev=26.05 00:10:41.623 clat percentiles (usec): 00:10:41.623 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 88], 00:10:41.623 | 30.00th=[ 92], 40.00th=[ 116], 50.00th=[ 123], 60.00th=[ 126], 00:10:41.623 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 153], 95.00th=[ 159], 00:10:41.623 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 208], 99.95th=[ 215], 00:10:41.623 | 99.99th=[ 310] 00:10:41.623 write: IOPS=4111, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1001msec); 0 zone resets 00:10:41.623 slat (nsec): min=8267, max=36413, avg=9229.46, stdev=1045.75 00:10:41.623 clat (usec): min=70, max=409, avg=106.72, stdev=23.99 00:10:41.623 lat (usec): min=79, max=418, avg=115.95, stdev=24.05 00:10:41.623 clat percentiles (usec): 00:10:41.623 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 84], 00:10:41.623 | 30.00th=[ 87], 40.00th=[ 93], 50.00th=[ 112], 60.00th=[ 117], 00:10:41.623 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 137], 95.00th=[ 149], 00:10:41.623 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 204], 99.95th=[ 221], 00:10:41.623 | 99.99th=[ 408] 00:10:41.623 bw ( KiB/s): min=16384, max=16384, per=24.87%, avg=16384.00, stdev= 0.00, samples=1 00:10:41.623 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:41.623 lat (usec) : 100=40.70%, 250=59.27%, 500=0.04% 00:10:41.623 cpu : usr=4.30%, sys=9.40%, ctx=8212, majf=0, minf=1 00:10:41.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.623 issued rwts: total=4096,4116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.623 latency : target=0, window=0, percentile=100.00%, depth=1 
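That completes the per-job reports for the first run: each job lists submission/completion latency percentiles (slat/clat/lat), bandwidth and IOPS samples, and queue-depth histograms in fio's human-readable format, which is what fio-wrapper drives here. When such numbers need to be checked by a script rather than by eye, fio can emit the same data as JSON; a sketch under the assumption of a stand-alone run (the job-file name is a placeholder, and --output-format=json is stock fio, not something the wrapper above uses):

# Re-run the workload with machine-readable output (illustrative only).
fio --output-format=json --output=result.json nvmf-write.fio
# Per-job write bandwidth (KiB/s) and IOPS from the JSON report.
jq -r '.jobs[] | "\(.jobname): bw=\(.write.bw) KiB/s iops=\(.write.iops)"' result.json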
00:10:41.623 00:10:41.623 Run status group 0 (all jobs): 00:10:41.623 READ: bw=60.6MiB/s (63.5MB/s), 14.0MiB/s-16.6MiB/s (14.7MB/s-17.4MB/s), io=60.6MiB (63.6MB), run=1001-1001msec 00:10:41.623 WRITE: bw=64.3MiB/s (67.5MB/s), 14.8MiB/s-18.0MiB/s (15.5MB/s-18.9MB/s), io=64.4MiB (67.5MB), run=1001-1001msec 00:10:41.623 00:10:41.623 Disk stats (read/write): 00:10:41.623 nvme0n1: ios=3014/3072, merge=0/0, ticks=372/360, in_queue=732, util=84.27% 00:10:41.623 nvme0n2: ios=3072/3144, merge=0/0, ticks=383/339, in_queue=722, util=84.88% 00:10:41.623 nvme0n3: ios=3525/3584, merge=0/0, ticks=359/350, in_queue=709, util=88.31% 00:10:41.623 nvme0n4: ios=3072/3366, merge=0/0, ticks=372/359, in_queue=731, util=89.45% 00:10:41.623 10:52:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:41.623 [global] 00:10:41.623 thread=1 00:10:41.623 invalidate=1 00:10:41.623 rw=randwrite 00:10:41.623 time_based=1 00:10:41.623 runtime=1 00:10:41.623 ioengine=libaio 00:10:41.623 direct=1 00:10:41.623 bs=4096 00:10:41.623 iodepth=1 00:10:41.623 norandommap=0 00:10:41.623 numjobs=1 00:10:41.623 00:10:41.623 verify_dump=1 00:10:41.623 verify_backlog=512 00:10:41.623 verify_state_save=0 00:10:41.623 do_verify=1 00:10:41.623 verify=crc32c-intel 00:10:41.623 [job0] 00:10:41.623 filename=/dev/nvme0n1 00:10:41.623 [job1] 00:10:41.623 filename=/dev/nvme0n2 00:10:41.623 [job2] 00:10:41.623 filename=/dev/nvme0n3 00:10:41.623 [job3] 00:10:41.623 filename=/dev/nvme0n4 00:10:41.623 Could not set queue depth (nvme0n1) 00:10:41.623 Could not set queue depth (nvme0n2) 00:10:41.623 Could not set queue depth (nvme0n3) 00:10:41.623 Could not set queue depth (nvme0n4) 00:10:41.882 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.882 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.882 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.882 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.882 fio-3.35 00:10:41.882 Starting 4 threads 00:10:43.261 00:10:43.261 job0: (groupid=0, jobs=1): err= 0: pid=1352062: Fri Nov 15 10:52:31 2024 00:10:43.261 read: IOPS=3096, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec) 00:10:43.261 slat (nsec): min=6318, max=33386, avg=8531.16, stdev=2489.62 00:10:43.261 clat (usec): min=64, max=235, avg=141.22, stdev=23.71 00:10:43.261 lat (usec): min=77, max=242, avg=149.75, stdev=23.43 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 86], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 121], 00:10:43.261 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 141], 60.00th=[ 153], 00:10:43.261 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 00:10:43.261 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 231], 99.95th=[ 233], 00:10:43.261 | 99.99th=[ 235] 00:10:43.261 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:43.261 slat (nsec): min=8339, max=40290, avg=10839.46, stdev=2751.80 00:10:43.261 clat (usec): min=55, max=378, avg=134.06, stdev=24.43 00:10:43.261 lat (usec): min=72, max=392, avg=144.90, stdev=24.38 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 78], 5.00th=[ 101], 10.00th=[ 109], 20.00th=[ 114], 00:10:43.261 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 137], 
60.00th=[ 145], 00:10:43.261 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 174], 00:10:43.261 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 219], 99.95th=[ 225], 00:10:43.261 | 99.99th=[ 379] 00:10:43.261 bw ( KiB/s): min=12288, max=12288, per=19.37%, avg=12288.00, stdev= 0.00, samples=1 00:10:43.261 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:43.261 lat (usec) : 100=3.55%, 250=96.44%, 500=0.01% 00:10:43.261 cpu : usr=3.60%, sys=8.50%, ctx=6685, majf=0, minf=1 00:10:43.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.261 issued rwts: total=3100,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.261 job1: (groupid=0, jobs=1): err= 0: pid=1352070: Fri Nov 15 10:52:31 2024 00:10:43.261 read: IOPS=4865, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec) 00:10:43.261 slat (nsec): min=6002, max=26503, avg=7115.63, stdev=779.50 00:10:43.261 clat (usec): min=54, max=187, avg=92.77, stdev=25.48 00:10:43.261 lat (usec): min=61, max=193, avg=99.88, stdev=25.49 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 74], 00:10:43.261 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 83], 00:10:43.261 | 70.00th=[ 117], 80.00th=[ 125], 90.00th=[ 133], 95.00th=[ 137], 00:10:43.261 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 182], 00:10:43.261 | 99.99th=[ 188] 00:10:43.261 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:10:43.261 slat (nsec): min=7865, max=36958, avg=8784.32, stdev=956.80 00:10:43.261 clat (usec): min=55, max=191, avg=87.90, stdev=24.13 00:10:43.261 lat (usec): min=63, max=200, avg=96.68, stdev=24.15 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 68], 20.00th=[ 71], 00:10:43.261 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 79], 00:10:43.261 | 70.00th=[ 87], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 130], 00:10:43.261 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 178], 00:10:43.261 | 99.99th=[ 192] 00:10:43.261 bw ( KiB/s): min=24576, max=24576, per=38.75%, avg=24576.00, stdev= 0.00, samples=1 00:10:43.261 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:10:43.261 lat (usec) : 100=69.36%, 250=30.64% 00:10:43.261 cpu : usr=6.10%, sys=10.00%, ctx=9990, majf=0, minf=1 00:10:43.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.261 issued rwts: total=4870,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.261 job2: (groupid=0, jobs=1): err= 0: pid=1352080: Fri Nov 15 10:52:31 2024 00:10:43.261 read: IOPS=3174, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:10:43.261 slat (nsec): min=6015, max=30300, avg=7269.01, stdev=863.98 00:10:43.261 clat (usec): min=75, max=223, avg=141.64, stdev=22.75 00:10:43.261 lat (usec): min=82, max=230, avg=148.90, stdev=22.65 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 90], 5.00th=[ 112], 10.00th=[ 119], 20.00th=[ 123], 00:10:43.261 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 135], 
60.00th=[ 153], 00:10:43.261 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:10:43.261 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 221], 99.95th=[ 225], 00:10:43.261 | 99.99th=[ 225] 00:10:43.261 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:43.261 slat (nsec): min=8030, max=61616, avg=8983.99, stdev=1246.23 00:10:43.261 clat (usec): min=69, max=385, avg=134.50, stdev=23.31 00:10:43.261 lat (usec): min=77, max=393, avg=143.49, stdev=23.30 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 84], 5.00th=[ 98], 10.00th=[ 111], 20.00th=[ 116], 00:10:43.261 | 30.00th=[ 119], 40.00th=[ 123], 50.00th=[ 137], 60.00th=[ 145], 00:10:43.261 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 172], 00:10:43.261 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 210], 99.95th=[ 212], 00:10:43.261 | 99.99th=[ 388] 00:10:43.261 bw ( KiB/s): min=12288, max=12288, per=19.37%, avg=12288.00, stdev= 0.00, samples=1 00:10:43.261 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:43.261 lat (usec) : 100=4.26%, 250=95.73%, 500=0.01% 00:10:43.261 cpu : usr=4.00%, sys=7.30%, ctx=6764, majf=0, minf=1 00:10:43.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.261 issued rwts: total=3178,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.261 job3: (groupid=0, jobs=1): err= 0: pid=1352086: Fri Nov 15 10:52:31 2024 00:10:43.261 read: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:43.261 slat (nsec): min=6833, max=36686, avg=8561.04, stdev=2247.50 00:10:43.261 clat (usec): min=78, max=242, avg=141.68, stdev=23.72 00:10:43.261 lat (usec): min=86, max=250, avg=150.24, stdev=23.52 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 93], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 122], 00:10:43.261 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 137], 60.00th=[ 151], 00:10:43.261 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 178], 00:10:43.261 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 227], 99.95th=[ 229], 00:10:43.261 | 99.99th=[ 243] 00:10:43.261 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:43.261 slat (nsec): min=7949, max=35486, avg=10727.66, stdev=2684.44 00:10:43.261 clat (usec): min=75, max=420, avg=135.07, stdev=23.65 00:10:43.261 lat (usec): min=85, max=430, avg=145.80, stdev=23.37 00:10:43.261 clat percentiles (usec): 00:10:43.261 | 1.00th=[ 86], 5.00th=[ 103], 10.00th=[ 112], 20.00th=[ 117], 00:10:43.261 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 135], 60.00th=[ 143], 00:10:43.261 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 180], 00:10:43.261 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 217], 99.95th=[ 223], 00:10:43.261 | 99.99th=[ 420] 00:10:43.261 bw ( KiB/s): min=12288, max=12288, per=19.37%, avg=12288.00, stdev= 0.00, samples=1 00:10:43.261 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:43.261 lat (usec) : 100=3.65%, 250=96.34%, 500=0.02% 00:10:43.261 cpu : usr=3.90%, sys=8.10%, ctx=6660, majf=0, minf=1 00:10:43.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:43.261 issued rwts: total=3076,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.261 00:10:43.261 Run status group 0 (all jobs): 00:10:43.261 READ: bw=55.5MiB/s (58.2MB/s), 12.0MiB/s-19.0MiB/s (12.6MB/s-19.9MB/s), io=55.6MiB (58.3MB), run=1001-1001msec 00:10:43.261 WRITE: bw=61.9MiB/s (64.9MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=62.0MiB (65.0MB), run=1001-1001msec 00:10:43.261 00:10:43.261 Disk stats (read/write): 00:10:43.261 nvme0n1: ios=2610/3031, merge=0/0, ticks=370/393, in_queue=763, util=86.97% 00:10:43.261 nvme0n2: ios=4311/4608, merge=0/0, ticks=366/365, in_queue=731, util=86.93% 00:10:43.261 nvme0n3: ios=2598/3072, merge=0/0, ticks=355/403, in_queue=758, util=89.10% 00:10:43.261 nvme0n4: ios=2560/3029, merge=0/0, ticks=345/392, in_queue=737, util=89.84% 00:10:43.261 10:52:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:43.261 [global] 00:10:43.261 thread=1 00:10:43.261 invalidate=1 00:10:43.261 rw=write 00:10:43.261 time_based=1 00:10:43.261 runtime=1 00:10:43.261 ioengine=libaio 00:10:43.261 direct=1 00:10:43.261 bs=4096 00:10:43.261 iodepth=128 00:10:43.261 norandommap=0 00:10:43.261 numjobs=1 00:10:43.261 00:10:43.261 verify_dump=1 00:10:43.261 verify_backlog=512 00:10:43.261 verify_state_save=0 00:10:43.261 do_verify=1 00:10:43.261 verify=crc32c-intel 00:10:43.261 [job0] 00:10:43.261 filename=/dev/nvme0n1 00:10:43.261 [job1] 00:10:43.261 filename=/dev/nvme0n2 00:10:43.261 [job2] 00:10:43.261 filename=/dev/nvme0n3 00:10:43.261 [job3] 00:10:43.261 filename=/dev/nvme0n4 00:10:43.261 Could not set queue depth (nvme0n1) 00:10:43.261 Could not set queue depth (nvme0n2) 00:10:43.261 Could not set queue depth (nvme0n3) 00:10:43.261 Could not set queue depth (nvme0n4) 00:10:43.520 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.520 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.520 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.520 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.520 fio-3.35 00:10:43.520 Starting 4 threads 00:10:44.896 00:10:44.896 job0: (groupid=0, jobs=1): err= 0: pid=1352533: Fri Nov 15 10:52:33 2024 00:10:44.896 read: IOPS=2828, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1005msec) 00:10:44.896 slat (nsec): min=1672, max=4614.2k, avg=169004.74, stdev=648879.14 00:10:44.896 clat (usec): min=4072, max=26340, avg=21451.35, stdev=1952.12 00:10:44.896 lat (usec): min=4090, max=26461, avg=21620.35, stdev=1924.21 00:10:44.896 clat percentiles (usec): 00:10:44.896 | 1.00th=[ 9503], 5.00th=[20317], 10.00th=[20841], 20.00th=[21103], 00:10:44.896 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:10:44.896 | 70.00th=[22152], 80.00th=[22414], 90.00th=[22676], 95.00th=[22938], 00:10:44.896 | 99.00th=[24773], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:10:44.896 | 99.99th=[26346] 00:10:44.896 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:44.896 slat (usec): min=2, max=4690, avg=164.46, stdev=617.17 00:10:44.896 clat (usec): min=14373, max=26849, avg=21422.34, stdev=1057.67 00:10:44.896 lat (usec): min=14408, max=26856, 
avg=21586.80, stdev=1012.49 00:10:44.896 clat percentiles (usec): 00:10:44.896 | 1.00th=[17695], 5.00th=[20055], 10.00th=[20317], 20.00th=[20841], 00:10:44.896 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21365], 60.00th=[21890], 00:10:44.896 | 70.00th=[22152], 80.00th=[22152], 90.00th=[22414], 95.00th=[22414], 00:10:44.896 | 99.00th=[24773], 99.50th=[25822], 99.90th=[26084], 99.95th=[26870], 00:10:44.896 | 99.99th=[26870] 00:10:44.896 bw ( KiB/s): min=12288, max=12288, per=14.71%, avg=12288.00, stdev= 0.00, samples=2 00:10:44.896 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:44.896 lat (msec) : 10=0.63%, 20=4.23%, 50=95.15% 00:10:44.896 cpu : usr=2.09%, sys=3.39%, ctx=657, majf=0, minf=1 00:10:44.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:44.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.896 issued rwts: total=2843,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.896 job1: (groupid=0, jobs=1): err= 0: pid=1352548: Fri Nov 15 10:52:33 2024 00:10:44.896 read: IOPS=3701, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1005msec) 00:10:44.896 slat (nsec): min=1519, max=2410.1k, avg=129000.18, stdev=317360.44 00:10:44.896 clat (usec): min=4000, max=21182, avg=16572.25, stdev=1323.88 00:10:44.896 lat (usec): min=4005, max=21186, avg=16701.25, stdev=1334.54 00:10:44.896 clat percentiles (usec): 00:10:44.896 | 1.00th=[ 8029], 5.00th=[15926], 10.00th=[16057], 20.00th=[16188], 00:10:44.896 | 30.00th=[16581], 40.00th=[16712], 50.00th=[16712], 60.00th=[16909], 00:10:44.896 | 70.00th=[16909], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:10:44.896 | 99.00th=[18220], 99.50th=[18482], 99.90th=[21103], 99.95th=[21103], 00:10:44.896 | 99.99th=[21103] 00:10:44.896 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:10:44.896 slat (usec): min=2, max=2495, avg=122.86, stdev=313.99 00:10:44.896 clat (usec): min=12097, max=18564, avg=15930.26, stdev=552.16 00:10:44.896 lat (usec): min=12104, max=18584, avg=16053.13, stdev=579.32 00:10:44.896 clat percentiles (usec): 00:10:44.896 | 1.00th=[14877], 5.00th=[15008], 10.00th=[15270], 20.00th=[15533], 00:10:44.896 | 30.00th=[15664], 40.00th=[15795], 50.00th=[15926], 60.00th=[16057], 00:10:44.896 | 70.00th=[16188], 80.00th=[16319], 90.00th=[16581], 95.00th=[16909], 00:10:44.896 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:10:44.896 | 99.99th=[18482] 00:10:44.896 bw ( KiB/s): min=16384, max=16384, per=19.61%, avg=16384.00, stdev= 0.00, samples=2 00:10:44.896 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:44.897 lat (msec) : 10=0.58%, 20=99.36%, 50=0.06% 00:10:44.897 cpu : usr=1.99%, sys=4.68%, ctx=1398, majf=0, minf=1 00:10:44.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:44.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.897 issued rwts: total=3720,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.897 job2: (groupid=0, jobs=1): err= 0: pid=1352561: Fri Nov 15 10:52:33 2024 00:10:44.897 read: IOPS=9485, BW=37.1MiB/s (38.9MB/s)(37.2MiB/1003msec) 00:10:44.897 slat (nsec): min=1502, max=1340.4k, avg=51620.87, 
stdev=193390.99 00:10:44.897 clat (usec): min=1503, max=8404, avg=6753.05, stdev=411.19 00:10:44.897 lat (usec): min=1996, max=8410, avg=6804.67, stdev=374.54 00:10:44.897 clat percentiles (usec): 00:10:44.897 | 1.00th=[ 5604], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 6652], 00:10:44.897 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6783], 60.00th=[ 6849], 00:10:44.897 | 70.00th=[ 6915], 80.00th=[ 6980], 90.00th=[ 7046], 95.00th=[ 7111], 00:10:44.897 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 8356], 99.95th=[ 8356], 00:10:44.897 | 99.99th=[ 8455] 00:10:44.897 write: IOPS=9698, BW=37.9MiB/s (39.7MB/s)(38.0MiB/1003msec); 0 zone resets 00:10:44.897 slat (usec): min=2, max=1827, avg=48.76, stdev=179.04 00:10:44.897 clat (usec): min=4916, max=7680, avg=6446.97, stdev=256.82 00:10:44.897 lat (usec): min=4924, max=7805, avg=6495.73, stdev=196.67 00:10:44.897 clat percentiles (usec): 00:10:44.897 | 1.00th=[ 5473], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6325], 00:10:44.897 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6456], 60.00th=[ 6521], 00:10:44.897 | 70.00th=[ 6587], 80.00th=[ 6652], 90.00th=[ 6718], 95.00th=[ 6783], 00:10:44.897 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 7373], 99.95th=[ 7373], 00:10:44.897 | 99.99th=[ 7701] 00:10:44.897 bw ( KiB/s): min=38520, max=39304, per=46.57%, avg=38912.00, stdev=554.37, samples=2 00:10:44.897 iops : min= 9630, max= 9826, avg=9728.00, stdev=138.59, samples=2 00:10:44.897 lat (msec) : 2=0.01%, 4=0.27%, 10=99.72% 00:10:44.897 cpu : usr=3.89%, sys=9.08%, ctx=1219, majf=0, minf=2 00:10:44.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:44.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.897 issued rwts: total=9514,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.897 job3: (groupid=0, jobs=1): err= 0: pid=1352566: Fri Nov 15 10:52:33 2024 00:10:44.897 read: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec) 00:10:44.897 slat (nsec): min=1622, max=2427.9k, avg=130386.65, stdev=320910.26 00:10:44.897 clat (usec): min=4002, max=19240, avg=16662.95, stdev=1324.40 00:10:44.897 lat (usec): min=4763, max=19551, avg=16793.34, stdev=1336.15 00:10:44.897 clat percentiles (usec): 00:10:44.897 | 1.00th=[ 7963], 5.00th=[15926], 10.00th=[16057], 20.00th=[16450], 00:10:44.897 | 30.00th=[16712], 40.00th=[16712], 50.00th=[16909], 60.00th=[16909], 00:10:44.897 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17695], 00:10:44.897 | 99.00th=[18482], 99.50th=[18482], 99.90th=[19006], 99.95th=[19268], 00:10:44.897 | 99.99th=[19268] 00:10:44.897 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:10:44.897 slat (usec): min=2, max=2349, avg=122.38, stdev=310.98 00:10:44.897 clat (usec): min=10759, max=21171, avg=15992.79, stdev=626.06 00:10:44.897 lat (usec): min=10769, max=21180, avg=16115.16, stdev=651.13 00:10:44.897 clat percentiles (usec): 00:10:44.897 | 1.00th=[14746], 5.00th=[15139], 10.00th=[15401], 20.00th=[15533], 00:10:44.897 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16057], 60.00th=[16057], 00:10:44.897 | 70.00th=[16188], 80.00th=[16319], 90.00th=[16581], 95.00th=[16909], 00:10:44.897 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18744], 99.95th=[21103], 00:10:44.897 | 99.99th=[21103] 00:10:44.897 bw ( KiB/s): min=16216, max=16384, per=19.51%, avg=16300.00, stdev=118.79, samples=2 
00:10:44.897 iops : min= 4054, max= 4096, avg=4075.00, stdev=29.70, samples=2 00:10:44.897 lat (msec) : 10=0.51%, 20=99.43%, 50=0.05% 00:10:44.897 cpu : usr=2.89%, sys=4.08%, ctx=1359, majf=0, minf=1 00:10:44.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:44.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.897 issued rwts: total=3690,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.897 00:10:44.897 Run status group 0 (all jobs): 00:10:44.897 READ: bw=76.8MiB/s (80.6MB/s), 11.0MiB/s-37.1MiB/s (11.6MB/s-38.9MB/s), io=77.2MiB (81.0MB), run=1003-1005msec 00:10:44.897 WRITE: bw=81.6MiB/s (85.6MB/s), 11.9MiB/s-37.9MiB/s (12.5MB/s-39.7MB/s), io=82.0MiB (86.0MB), run=1003-1005msec 00:10:44.897 00:10:44.897 Disk stats (read/write): 00:10:44.897 nvme0n1: ios=2539/2560, merge=0/0, ticks=13347/13443, in_queue=26790, util=86.87% 00:10:44.897 nvme0n2: ios=3103/3584, merge=0/0, ticks=17033/18632, in_queue=35665, util=87.13% 00:10:44.897 nvme0n3: ios=8192/8279, merge=0/0, ticks=17922/16879, in_queue=34801, util=89.12% 00:10:44.897 nvme0n4: ios=3078/3584, merge=0/0, ticks=16975/18662, in_queue=35637, util=89.77% 00:10:44.897 10:52:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:44.897 [global] 00:10:44.897 thread=1 00:10:44.897 invalidate=1 00:10:44.897 rw=randwrite 00:10:44.897 time_based=1 00:10:44.897 runtime=1 00:10:44.897 ioengine=libaio 00:10:44.897 direct=1 00:10:44.897 bs=4096 00:10:44.897 iodepth=128 00:10:44.897 norandommap=0 00:10:44.897 numjobs=1 00:10:44.897 00:10:44.897 verify_dump=1 00:10:44.897 verify_backlog=512 00:10:44.897 verify_state_save=0 00:10:44.897 do_verify=1 00:10:44.897 verify=crc32c-intel 00:10:44.897 [job0] 00:10:44.897 filename=/dev/nvme0n1 00:10:44.897 [job1] 00:10:44.897 filename=/dev/nvme0n2 00:10:44.897 [job2] 00:10:44.897 filename=/dev/nvme0n3 00:10:44.897 [job3] 00:10:44.897 filename=/dev/nvme0n4 00:10:44.897 Could not set queue depth (nvme0n1) 00:10:44.897 Could not set queue depth (nvme0n2) 00:10:44.897 Could not set queue depth (nvme0n3) 00:10:44.897 Could not set queue depth (nvme0n4) 00:10:45.156 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.156 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.156 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.156 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.156 fio-3.35 00:10:45.156 Starting 4 threads 00:10:46.533 00:10:46.533 job0: (groupid=0, jobs=1): err= 0: pid=1352997: Fri Nov 15 10:52:35 2024 00:10:46.533 read: IOPS=5552, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1003msec) 00:10:46.533 slat (nsec): min=1295, max=3779.4k, avg=87133.10, stdev=293241.38 00:10:46.533 clat (usec): min=2052, max=17019, avg=11419.16, stdev=4126.41 00:10:46.533 lat (usec): min=2933, max=17021, avg=11506.29, stdev=4147.86 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6849], 00:10:46.533 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[14222], 
60.00th=[14877], 00:10:46.533 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:10:46.533 | 99.00th=[15926], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:10:46.533 | 99.99th=[16909] 00:10:46.533 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:46.533 slat (nsec): min=1932, max=2176.6k, avg=86242.15, stdev=283563.28 00:10:46.533 clat (usec): min=5344, max=23696, avg=11204.50, stdev=4589.14 00:10:46.533 lat (usec): min=5969, max=23707, avg=11290.74, stdev=4617.38 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6390], 00:10:46.533 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[13960], 60.00th=[14484], 00:10:46.533 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[17695], 00:10:46.533 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22676], 99.95th=[23462], 00:10:46.533 | 99.99th=[23725] 00:10:46.533 bw ( KiB/s): min=16384, max=28672, per=21.54%, avg=22528.00, stdev=8688.93, samples=2 00:10:46.533 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:10:46.533 lat (msec) : 4=0.06%, 10=44.90%, 20=53.00%, 50=2.04% 00:10:46.533 cpu : usr=3.79%, sys=5.09%, ctx=1216, majf=0, minf=1 00:10:46.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:46.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.533 issued rwts: total=5569,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.533 job1: (groupid=0, jobs=1): err= 0: pid=1353010: Fri Nov 15 10:52:35 2024 00:10:46.533 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(36.0MiB/1002msec) 00:10:46.533 slat (nsec): min=1408, max=1142.3k, avg=52754.56, stdev=186885.33 00:10:46.533 clat (usec): min=5459, max=8233, avg=6945.33, stdev=445.82 00:10:46.533 lat (usec): min=5466, max=8241, avg=6998.08, stdev=454.74 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6652], 00:10:46.533 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7046], 00:10:46.533 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:10:46.533 | 99.00th=[ 7963], 99.50th=[ 8029], 99.90th=[ 8160], 99.95th=[ 8225], 00:10:46.533 | 99.99th=[ 8225] 00:10:46.533 write: IOPS=9529, BW=37.2MiB/s (39.0MB/s)(37.3MiB/1002msec); 0 zone resets 00:10:46.533 slat (nsec): min=1870, max=1396.4k, avg=49793.98, stdev=173559.74 00:10:46.533 clat (usec): min=548, max=7934, avg=6583.32, stdev=551.16 00:10:46.533 lat (usec): min=1276, max=7960, avg=6633.11, stdev=558.37 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6325], 00:10:46.533 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6718], 00:10:46.533 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:10:46.533 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[ 7767], 99.95th=[ 7832], 00:10:46.533 | 99.99th=[ 7963] 00:10:46.533 bw ( KiB/s): min=37368, max=38008, per=36.04%, avg=37688.00, stdev=452.55, samples=2 00:10:46.533 iops : min= 9342, max= 9502, avg=9422.00, stdev=113.14, samples=2 00:10:46.533 lat (usec) : 750=0.01% 00:10:46.533 lat (msec) : 2=0.08%, 4=0.25%, 10=99.66% 00:10:46.533 cpu : usr=5.09%, sys=8.59%, ctx=1317, majf=0, minf=1 00:10:46.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 
00:10:46.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.533 issued rwts: total=9216,9549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.533 job2: (groupid=0, jobs=1): err= 0: pid=1353026: Fri Nov 15 10:52:35 2024 00:10:46.533 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:10:46.533 slat (nsec): min=1445, max=1767.6k, avg=78104.91, stdev=263712.26 00:10:46.533 clat (usec): min=6487, max=17458, avg=10161.53, stdev=3305.89 00:10:46.533 lat (usec): min=7467, max=17479, avg=10239.64, stdev=3322.62 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 7177], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 8160], 00:10:46.533 | 30.00th=[ 8291], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:10:46.533 | 70.00th=[ 8717], 80.00th=[15139], 90.00th=[16450], 95.00th=[16712], 00:10:46.533 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:10:46.533 | 99.99th=[17433] 00:10:46.533 write: IOPS=6418, BW=25.1MiB/s (26.3MB/s)(25.1MiB/1002msec); 0 zone resets 00:10:46.533 slat (nsec): min=1980, max=2238.8k, avg=76463.22, stdev=256273.55 00:10:46.533 clat (usec): min=765, max=23647, avg=9942.13, stdev=4018.24 00:10:46.533 lat (usec): min=2099, max=24439, avg=10018.59, stdev=4041.58 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 4817], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7635], 00:10:46.533 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8094], 00:10:46.533 | 70.00th=[ 8291], 80.00th=[15270], 90.00th=[16319], 95.00th=[16712], 00:10:46.533 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22676], 99.95th=[22938], 00:10:46.533 | 99.99th=[23725] 00:10:46.533 bw ( KiB/s): min=20480, max=29952, per=24.11%, avg=25216.00, stdev=6697.72, samples=2 00:10:46.533 iops : min= 5120, max= 7488, avg=6304.00, stdev=1674.43, samples=2 00:10:46.533 lat (usec) : 1000=0.01% 00:10:46.533 lat (msec) : 4=0.36%, 10=75.39%, 20=22.36%, 50=1.88% 00:10:46.533 cpu : usr=3.60%, sys=6.39%, ctx=1034, majf=0, minf=1 00:10:46.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:46.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.533 issued rwts: total=6144,6431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.533 job3: (groupid=0, jobs=1): err= 0: pid=1353031: Fri Nov 15 10:52:35 2024 00:10:46.533 read: IOPS=4426, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1003msec) 00:10:46.533 slat (nsec): min=1367, max=2204.4k, avg=109167.16, stdev=344830.98 00:10:46.533 clat (usec): min=2055, max=17962, avg=14146.87, stdev=2951.33 00:10:46.533 lat (usec): min=2929, max=17982, avg=14256.04, stdev=2953.78 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 7111], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[13173], 00:10:46.533 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:10:46.533 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16581], 95.00th=[16909], 00:10:46.533 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:10:46.533 | 99.99th=[17957] 00:10:46.533 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:46.533 slat (nsec): min=1910, max=2450.2k, avg=105864.68, stdev=332174.07 00:10:46.533 clat (usec): 
min=2949, max=16934, avg=13919.30, stdev=2606.14 00:10:46.533 lat (usec): min=2961, max=16944, avg=14025.16, stdev=2609.99 00:10:46.533 clat percentiles (usec): 00:10:46.533 | 1.00th=[ 6456], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[13698], 00:10:46.533 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:10:46.533 | 70.00th=[15008], 80.00th=[15664], 90.00th=[16188], 95.00th=[16450], 00:10:46.533 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:10:46.533 | 99.99th=[16909] 00:10:46.534 bw ( KiB/s): min=16384, max=20480, per=17.63%, avg=18432.00, stdev=2896.31, samples=2 00:10:46.534 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:46.534 lat (msec) : 4=0.34%, 10=14.71%, 20=84.95% 00:10:46.534 cpu : usr=4.29%, sys=5.69%, ctx=1125, majf=0, minf=1 00:10:46.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:46.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.534 issued rwts: total=4440,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.534 00:10:46.534 Run status group 0 (all jobs): 00:10:46.534 READ: bw=98.8MiB/s (104MB/s), 17.3MiB/s-35.9MiB/s (18.1MB/s-37.7MB/s), io=99.1MiB (104MB), run=1002-1003msec 00:10:46.534 WRITE: bw=102MiB/s (107MB/s), 17.9MiB/s-37.2MiB/s (18.8MB/s-39.0MB/s), io=102MiB (107MB), run=1002-1003msec 00:10:46.534 00:10:46.534 Disk stats (read/write): 00:10:46.534 nvme0n1: ios=4888/5120, merge=0/0, ticks=14856/15552, in_queue=30408, util=86.97% 00:10:46.534 nvme0n2: ios=7923/8192, merge=0/0, ticks=13388/12814, in_queue=26202, util=87.12% 00:10:46.534 nvme0n3: ios=5120/5226, merge=0/0, ticks=16202/16367, in_queue=32569, util=89.02% 00:10:46.534 nvme0n4: ios=3704/4096, merge=0/0, ticks=12971/15394, in_queue=28365, util=89.67% 00:10:46.534 10:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:46.534 10:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1353128 00:10:46.534 10:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:46.534 10:52:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:46.534 [global] 00:10:46.534 thread=1 00:10:46.534 invalidate=1 00:10:46.534 rw=read 00:10:46.534 time_based=1 00:10:46.534 runtime=10 00:10:46.534 ioengine=libaio 00:10:46.534 direct=1 00:10:46.534 bs=4096 00:10:46.534 iodepth=1 00:10:46.534 norandommap=1 00:10:46.534 numjobs=1 00:10:46.534 00:10:46.534 [job0] 00:10:46.534 filename=/dev/nvme0n1 00:10:46.534 [job1] 00:10:46.534 filename=/dev/nvme0n2 00:10:46.534 [job2] 00:10:46.534 filename=/dev/nvme0n3 00:10:46.534 [job3] 00:10:46.534 filename=/dev/nvme0n4 00:10:46.534 Could not set queue depth (nvme0n1) 00:10:46.534 Could not set queue depth (nvme0n2) 00:10:46.534 Could not set queue depth (nvme0n3) 00:10:46.534 Could not set queue depth (nvme0n4) 00:10:46.534 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.534 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.534 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.534 job3: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.534 fio-3.35 00:10:46.534 Starting 4 threads 00:10:49.822 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:49.822 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=88289280, buflen=4096 00:10:49.822 fio: pid=1353410, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.822 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:49.822 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.822 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:49.822 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=84430848, buflen=4096 00:10:49.822 fio: pid=1353409, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.822 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=29454336, buflen=4096 00:10:49.822 fio: pid=1353407, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.822 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.822 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:50.081 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=39002112, buflen=4096 00:10:50.081 fio: pid=1353408, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.081 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.081 10:52:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:50.081 00:10:50.081 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1353407: Fri Nov 15 10:52:38 2024 00:10:50.081 read: IOPS=7529, BW=29.4MiB/s (30.8MB/s)(92.1MiB/3131msec) 00:10:50.081 slat (usec): min=4, max=18262, avg= 9.61, stdev=188.31 00:10:50.081 clat (nsec): min=1844, max=288600, avg=121130.47, stdev=27471.48 00:10:50.081 lat (usec): min=58, max=18343, avg=130.74, stdev=190.20 00:10:50.081 clat percentiles (usec): 00:10:50.081 | 1.00th=[ 59], 5.00th=[ 78], 10.00th=[ 83], 20.00th=[ 103], 00:10:50.081 | 30.00th=[ 114], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 123], 00:10:50.081 | 70.00th=[ 128], 80.00th=[ 147], 90.00th=[ 159], 95.00th=[ 167], 00:10:50.081 | 99.00th=[ 190], 99.50th=[ 204], 99.90th=[ 219], 99.95th=[ 223], 00:10:50.081 | 99.99th=[ 235] 00:10:50.081 bw ( KiB/s): min=24616, max=31855, per=27.75%, avg=30265.17, stdev=2785.88, samples=6 00:10:50.081 iops : min= 6154, max= 7963, avg=7566.17, stdev=696.38, samples=6 00:10:50.081 lat (usec) : 2=0.01%, 100=18.87%, 250=81.12%, 500=0.01% 00:10:50.081 cpu : usr=2.40%, sys=8.18%, ctx=23581, majf=0, minf=1 00:10:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 issued rwts: total=23576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.081 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1353408: Fri Nov 15 10:52:38 2024 00:10:50.081 read: IOPS=7708, BW=30.1MiB/s (31.6MB/s)(101MiB/3361msec) 00:10:50.081 slat (usec): min=2, max=17014, avg=10.28, stdev=192.58 00:10:50.081 clat (usec): min=50, max=26971, avg=117.58, stdev=218.77 00:10:50.081 lat (usec): min=57, max=26979, avg=127.85, stdev=291.37 00:10:50.081 clat percentiles (usec): 00:10:50.081 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 85], 00:10:50.081 | 30.00th=[ 109], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:10:50.081 | 70.00th=[ 126], 80.00th=[ 143], 90.00th=[ 159], 95.00th=[ 167], 00:10:50.081 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 217], 99.95th=[ 221], 00:10:50.081 | 99.99th=[ 231] 00:10:50.081 bw ( KiB/s): min=24440, max=31520, per=27.35%, avg=29833.83, stdev=2757.10, samples=6 00:10:50.081 iops : min= 6110, max= 7880, avg=7458.33, stdev=689.30, samples=6 00:10:50.081 lat (usec) : 100=26.23%, 250=73.76% 00:10:50.081 lat (msec) : 50=0.01% 00:10:50.081 cpu : usr=2.05%, sys=9.20%, ctx=25914, majf=0, minf=2 00:10:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 issued rwts: total=25907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.081 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1353409: Fri Nov 15 10:52:38 2024 00:10:50.081 read: IOPS=7013, BW=27.4MiB/s (28.7MB/s)(80.5MiB/2939msec) 00:10:50.081 slat (usec): min=2, max=7853, avg= 8.11, stdev=76.55 00:10:50.081 clat (usec): min=60, max=373, avg=132.14, stdev=27.35 00:10:50.081 lat (usec): min=65, max=7993, avg=140.25, stdev=81.50 00:10:50.081 clat percentiles (usec): 00:10:50.081 | 1.00th=[ 79], 5.00th=[ 86], 10.00th=[ 90], 20.00th=[ 119], 00:10:50.081 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:10:50.081 | 70.00th=[ 141], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 176], 00:10:50.081 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 219], 99.95th=[ 227], 00:10:50.081 | 99.99th=[ 253] 00:10:50.081 bw ( KiB/s): min=24056, max=29480, per=25.65%, avg=27980.80, stdev=2214.84, samples=5 00:10:50.081 iops : min= 6014, max= 7370, avg=6995.20, stdev=553.71, samples=5 00:10:50.081 lat (usec) : 100=17.25%, 250=82.73%, 500=0.01% 00:10:50.081 cpu : usr=2.42%, sys=7.83%, ctx=20617, majf=0, minf=2 00:10:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 issued rwts: total=20614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.081 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1353410: Fri Nov 15 
10:52:38 2024 00:10:50.081 read: IOPS=7948, BW=31.0MiB/s (32.6MB/s)(84.2MiB/2712msec) 00:10:50.081 slat (nsec): min=5674, max=56635, avg=7060.44, stdev=891.02 00:10:50.081 clat (usec): min=64, max=361, avg=117.42, stdev=28.21 00:10:50.081 lat (usec): min=71, max=374, avg=124.48, stdev=28.16 00:10:50.081 clat percentiles (usec): 00:10:50.081 | 1.00th=[ 76], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 87], 00:10:50.081 | 30.00th=[ 91], 40.00th=[ 120], 50.00th=[ 126], 60.00th=[ 129], 00:10:50.081 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 159], 95.00th=[ 172], 00:10:50.081 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 202], 00:10:50.081 | 99.99th=[ 281] 00:10:50.081 bw ( KiB/s): min=28792, max=38536, per=28.40%, avg=30976.00, stdev=4235.37, samples=5 00:10:50.081 iops : min= 7198, max= 9634, avg=7744.00, stdev=1058.84, samples=5 00:10:50.081 lat (usec) : 100=37.78%, 250=62.20%, 500=0.01% 00:10:50.081 cpu : usr=2.40%, sys=9.11%, ctx=21557, majf=0, minf=2 00:10:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.081 issued rwts: total=21556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.081 00:10:50.081 Run status group 0 (all jobs): 00:10:50.081 READ: bw=107MiB/s (112MB/s), 27.4MiB/s-31.0MiB/s (28.7MB/s-32.6MB/s), io=358MiB (375MB), run=2712-3361msec 00:10:50.081 00:10:50.081 Disk stats (read/write): 00:10:50.081 nvme0n1: ios=23575/0, merge=0/0, ticks=2681/0, in_queue=2681, util=94.30% 00:10:50.081 nvme0n2: ios=23184/0, merge=0/0, ticks=2665/0, in_queue=2665, util=94.03% 00:10:50.081 nvme0n3: ios=20098/0, merge=0/0, ticks=2545/0, in_queue=2545, util=96.09% 00:10:50.081 nvme0n4: ios=20560/0, merge=0/0, ticks=2270/0, in_queue=2270, util=96.46% 00:10:50.341 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.341 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:50.600 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.600 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:50.858 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.858 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:51.117 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.117 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:51.117 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:51.117 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1353128 00:10:51.117 10:52:39 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:51.117 10:52:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:53.649 nvmf hotplug test: fio failed as expected 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:53.649 rmmod nvme_rdma 00:10:53.649 rmmod nvme_fabrics 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:53.649 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:53.650 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1350003 ']' 00:10:53.650 10:52:42 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1350003 00:10:53.650 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1350003 ']' 00:10:53.650 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1350003 00:10:53.650 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:53.650 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:53.650 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1350003 00:10:53.908 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:53.908 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:53.908 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1350003' 00:10:53.908 killing process with pid 1350003 00:10:53.908 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1350003 00:10:53.908 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1350003 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:54.167 00:10:54.167 real 0m27.954s 00:10:54.167 user 2m4.620s 00:10:54.167 sys 0m7.916s 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.167 ************************************ 00:10:54.167 END TEST nvmf_fio_target 00:10:54.167 ************************************ 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.167 ************************************ 00:10:54.167 START TEST nvmf_bdevio 00:10:54.167 ************************************ 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:54.167 * Looking for test storage... 
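The nvmf_fio_target test that just ended above is the NVMe-oF hotplug check: fio is started against the exported namespaces, the backing bdevs are then deleted out from under it over RPC, and the test passes only if fio fails. What follows is a condensed, hand-written sketch of that sequence, reconstructed from the trace for anyone replaying it outside the harness — it is not target/fio.sh itself; the workspace path, bdev names, serial, and subsystem NQN are taken from this run, and the delete loop and the disconnect-wait loop are simplifications of the harness's own helpers (fio.sh@63-@66 and waitforserial_disconnect).

  #!/usr/bin/env bash
  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk

  # Start a 10s, queue-depth-1 read workload against the exported
  # namespaces in the background (fio.sh@58-@59), then let it ramp (@61).
  "$spdk/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Hot-remove the bdevs underneath the running job (@63-@66); fio should
  # start logging "io_u error ... Operation not supported" on every file.
  "$spdk/scripts/rpc.py" bdev_raid_delete concat0
  "$spdk/scripts/rpc.py" bdev_raid_delete raid0
  for mb in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      "$spdk/scripts/rpc.py" bdev_malloc_delete "$mb"
  done

  # Collect fio's exit status (@69-@70), disconnect the host and wait for
  # the namespaces to disappear (@72-@73), then require that the workload
  # actually failed (@75-@80).
  fio_status=0
  wait "$fio_pid" || fio_status=$?
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1
  done
  if [ "$fio_status" -eq 0 ]; then
      echo "hotplug test failed: fio did not notice the bdev removal"
      exit 1
  fi
  echo "nvmf hotplug test: fio failed as expected"

  # Drop the subsystem and clean up fio's verify-state files (@83-@87).
  "$spdk/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state

In the run above, all four jobs exit with err=95 (Operation not supported) and the harness records fio_status=4, so the '[' 4 -eq 0 ']' check at fio.sh@75 falls through to the expected-failure message before the subsystem is deleted.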
00:10:54.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:54.167 10:52:42 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.167 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:54.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.168 --rc genhtml_branch_coverage=1 00:10:54.168 --rc genhtml_function_coverage=1 00:10:54.168 --rc genhtml_legend=1 00:10:54.168 --rc geninfo_all_blocks=1 00:10:54.168 --rc geninfo_unexecuted_blocks=1 00:10:54.168 00:10:54.168 ' 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:54.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.168 --rc genhtml_branch_coverage=1 00:10:54.168 --rc genhtml_function_coverage=1 00:10:54.168 --rc genhtml_legend=1 00:10:54.168 --rc geninfo_all_blocks=1 00:10:54.168 --rc geninfo_unexecuted_blocks=1 00:10:54.168 00:10:54.168 ' 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:54.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.168 --rc genhtml_branch_coverage=1 00:10:54.168 --rc genhtml_function_coverage=1 00:10:54.168 --rc genhtml_legend=1 00:10:54.168 --rc geninfo_all_blocks=1 00:10:54.168 --rc geninfo_unexecuted_blocks=1 00:10:54.168 00:10:54.168 ' 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:54.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.168 --rc genhtml_branch_coverage=1 00:10:54.168 --rc genhtml_function_coverage=1 00:10:54.168 --rc genhtml_legend=1 00:10:54.168 --rc geninfo_all_blocks=1 00:10:54.168 --rc geninfo_unexecuted_blocks=1 00:10:54.168 00:10:54.168 ' 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.168 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:54.427 10:52:43 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.427 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.428 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.428 10:52:43 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.702 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.702 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.702 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.702 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.702 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.702 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:10:59.703 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:10:59.703 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:59.703 10:52:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:10:59.703 Found net devices under 0000:af:00.0: mlx_0_0 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:10:59.703 Found net devices under 0000:af:00.1: mlx_0_1 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # 
allocate_nic_ips 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:59.703 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:59.703 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:10:59.703 altname enp175s0f0np0 00:10:59.703 altname ens801f0np0 00:10:59.703 inet 192.168.100.8/24 scope global mlx_0_0 00:10:59.703 valid_lft forever preferred_lft forever 00:10:59.703 10:52:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:59.703 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:59.704 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:59.704 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:10:59.704 altname enp175s0f1np1 00:10:59.704 altname ens801f1np1 00:10:59.704 inet 192.168.100.9/24 scope global mlx_0_1 00:10:59.704 valid_lft forever preferred_lft forever 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- 
# for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:59.704 192.168.100.9' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:59.704 192.168.100.9' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:59.704 192.168.100.9' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1357720 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1357720 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1357720 ']' 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.704 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.963 [2024-11-15 10:52:48.620414] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:10:59.963 [2024-11-15 10:52:48.620480] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.963 [2024-11-15 10:52:48.683237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.963 [2024-11-15 10:52:48.724723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.963 [2024-11-15 10:52:48.724761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.963 [2024-11-15 10:52:48.724767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.963 [2024-11-15 10:52:48.724774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.963 [2024-11-15 10:52:48.724779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
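The `waitforlisten 1357720` step above polls until nvmf_tgt answers RPCs on /var/tmp/spdk.sock before the test proceeds. A minimal sketch of that wait loop, assuming SPDK's stock scripts/rpc.py client and the default socket path; the real helper in autotest_common.sh also handles retry counts and alternate RPC addresses:

    # Hedged sketch of a waitforlisten-style helper; not the literal
    # autotest_common.sh implementation. $rootdir is assumed to be the
    # spdk checkout (here /var/jenkins/workspace/nvmf-phy-autotest/spdk).
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Bail out early if the target process already died.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods succeeds once the app listens on the socket.
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }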
00:10:59.963 [2024-11-15 10:52:48.726558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:59.963 [2024-11-15 10:52:48.726647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:10:59.963 [2024-11-15 10:52:48.726757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:59.963 [2024-11-15 10:52:48.726758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:10:59.963 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:59.963 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0
00:10:59.963 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:59.963 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:59.963 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:00.226 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:00.226 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:11:00.226 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.226 10:52:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:00.226 [2024-11-15 10:52:48.893124] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22b2b30/0x22b7020) succeed.
00:11:00.226 [2024-11-15 10:52:48.902479] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22b41c0/0x22f86c0) succeed.
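nvmf_tgt was launched with `-m 0x78`, and the reactor lines above (cores 3, 4, 5 and 6) confirm how that mask decodes: 0x78 is binary 0111_1000, so exactly bits 3 through 6 are set. A small sketch that decodes such a plain hex mask (assumption: simple mask form, not SPDK's core-list syntax such as `[3-6]`):

    # Decode a hex core mask into the cores that should get a reactor.
    mask=0x78
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # Prints cores 3, 4, 5 and 6 for 0x78, matching the log above.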
00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.226 Malloc0 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.226 [2024-11-15 10:52:49.078913] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:00.226 { 00:11:00.226 "params": { 00:11:00.226 "name": "Nvme$subsystem", 00:11:00.226 "trtype": "$TEST_TRANSPORT", 00:11:00.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.226 "adrfam": "ipv4", 00:11:00.226 "trsvcid": "$NVMF_PORT", 00:11:00.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.226 "hdgst": ${hdgst:-false}, 00:11:00.226 "ddgst": ${ddgst:-false} 00:11:00.226 }, 00:11:00.226 "method": "bdev_nvme_attach_controller" 00:11:00.226 } 00:11:00.226 EOF 00:11:00.226 )") 00:11:00.226 10:52:49 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:11:00.226 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:00.226 "params": {
00:11:00.226 "name": "Nvme1",
00:11:00.226 "trtype": "rdma",
00:11:00.226 "traddr": "192.168.100.8",
00:11:00.226 "adrfam": "ipv4",
00:11:00.226 "trsvcid": "4420",
00:11:00.226 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:11:00.226 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:11:00.226 "hdgst": false,
00:11:00.226 "ddgst": false
00:11:00.226 },
00:11:00.226 "method": "bdev_nvme_attach_controller"
00:11:00.226 }'
00:11:00.487 [2024-11-15 10:52:49.130463] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:11:00.487 [2024-11-15 10:52:49.130505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357849 ]
00:11:00.487 [2024-11-15 10:52:49.193936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:00.487 [2024-11-15 10:52:49.238015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:00.487 [2024-11-15 10:52:49.238111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:00.487 [2024-11-15 10:52:49.238111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:00.746 I/O targets:
00:11:00.746 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:11:00.746
00:11:00.746
00:11:00.746 CUnit - A unit testing framework for C - Version 2.1-3
00:11:00.746 http://cunit.sourceforge.net/
00:11:00.746
00:11:00.746
00:11:00.746 Suite: bdevio tests on: Nvme1n1
00:11:00.746 Test: blockdev write read block ...passed
00:11:00.746 Test: blockdev write zeroes read block ...passed
00:11:00.746 Test: blockdev write zeroes read no split ...passed
00:11:00.746 Test: blockdev write zeroes read split ...passed
00:11:00.746 Test: blockdev write zeroes read split partial ...passed
00:11:00.746 Test: blockdev reset ...[2024-11-15 10:52:49.447651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:11:00.746 [2024-11-15 10:52:49.470558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:11:00.746 [2024-11-15 10:52:49.498119] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
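The reset verdict and the remaining blockdev cases continue below. The `gen_nvmf_target_json` trace above shows how the bdevio configuration is produced: a heredoc fragment per subsystem is collected into `config[]`, joined with `IFS=,`, validated through `jq .`, and handed to bdevio on /dev/fd/62. A condensed sketch of that pattern with illustrative names; the real helper in nvmf/common.sh additionally wraps the fragments in a full "subsystems"/"bdev" config document:

    # Hedged sketch of the printf-plus-jq JSON generation traced above.
    gen_json_sketch() {
        local ip=$1
        printf '%s\n' '{
          "params": {
            "name": "Nvme1", "trtype": "rdma", "traddr": "'"$ip"'",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }' | jq .    # jq both validates and pretty-prints the fragment
    }
    gen_json_sketch 192.168.100.8   # yields the JSON printed above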
00:11:00.746 passed 00:11:00.746 Test: blockdev write read 8 blocks ...passed 00:11:00.746 Test: blockdev write read size > 128k ...passed 00:11:00.746 Test: blockdev write read invalid size ...passed 00:11:00.746 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:00.746 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:00.746 Test: blockdev write read max offset ...passed 00:11:00.746 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:00.746 Test: blockdev writev readv 8 blocks ...passed 00:11:00.746 Test: blockdev writev readv 30 x 1block ...passed 00:11:00.746 Test: blockdev writev readv block ...passed 00:11:00.746 Test: blockdev writev readv size > 128k ...passed 00:11:00.746 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:00.746 Test: blockdev comparev and writev ...[2024-11-15 10:52:49.501072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:00.746 [2024-11-15 10:52:49.501108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:00.746 [2024-11-15 10:52:49.501296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:00.746 [2024-11-15 10:52:49.501314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:00.746 [2024-11-15 10:52:49.501489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:00.746 [2024-11-15 10:52:49.501506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:00.746 [2024-11-15 10:52:49.501676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:00.746 [2024-11-15 10:52:49.501693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.746 [2024-11-15 10:52:49.501700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:00.746 passed
00:11:00.746 Test: blockdev nvme passthru rw ...passed
00:11:00.746 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:52:49.501959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:00.746 [2024-11-15 10:52:49.501969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:00.746 [2024-11-15 10:52:49.502015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:00.746 [2024-11-15 10:52:49.502024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:00.746 [2024-11-15 10:52:49.502069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:00.746 [2024-11-15 10:52:49.502077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:00.746 [2024-11-15 10:52:49.502120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:11:00.746 [2024-11-15 10:52:49.502128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:00.746 passed
00:11:00.746 Test: blockdev nvme admin passthru ...passed
00:11:00.746 Test: blockdev copy ...passed
00:11:00.746
00:11:00.746 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:00.746               suites      1      1    n/a      0        0
00:11:00.746                tests     23     23     23      0        0
00:11:00.746              asserts    152    152    152      0      n/a
00:11:00.746
00:11:00.746 Elapsed time = 0.172 seconds
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:11:01.005 rmmod nvme_rdma
00:11:01.005 rmmod nvme_fabrics
00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:01.005 10:52:49
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1357720 ']' 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1357720 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1357720 ']' 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1357720 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1357720 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1357720' 00:11:01.005 killing process with pid 1357720 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1357720 00:11:01.005 10:52:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1357720 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:01.264 00:11:01.264 real 0m7.168s 00:11:01.264 user 0m7.893s 00:11:01.264 sys 0m4.642s 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.264 ************************************ 00:11:01.264 END TEST nvmf_bdevio 00:11:01.264 ************************************ 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:01.264 00:11:01.264 real 3m57.544s 00:11:01.264 user 11m3.639s 00:11:01.264 sys 1m15.666s 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.264 ************************************ 00:11:01.264 END TEST nvmf_target_core 00:11:01.264 ************************************ 00:11:01.264 10:52:50 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:01.264 10:52:50 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:01.264 10:52:50 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.264 10:52:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:01.264 ************************************ 00:11:01.264 START TEST nvmf_target_extra 00:11:01.264 ************************************ 00:11:01.264 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:01.524 * Looking for test storage... 00:11:01.524 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:01.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.524 --rc genhtml_branch_coverage=1 00:11:01.524 --rc genhtml_function_coverage=1 00:11:01.524 --rc genhtml_legend=1 00:11:01.524 --rc geninfo_all_blocks=1 00:11:01.524 --rc geninfo_unexecuted_blocks=1 00:11:01.524 00:11:01.524 ' 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:01.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.524 --rc genhtml_branch_coverage=1 00:11:01.524 --rc genhtml_function_coverage=1 00:11:01.524 --rc genhtml_legend=1 00:11:01.524 --rc geninfo_all_blocks=1 00:11:01.524 --rc geninfo_unexecuted_blocks=1 00:11:01.524 00:11:01.524 ' 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:01.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.524 --rc genhtml_branch_coverage=1 00:11:01.524 --rc genhtml_function_coverage=1 00:11:01.524 --rc genhtml_legend=1 00:11:01.524 --rc geninfo_all_blocks=1 00:11:01.524 --rc geninfo_unexecuted_blocks=1 00:11:01.524 00:11:01.524 ' 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:01.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.524 --rc genhtml_branch_coverage=1 00:11:01.524 --rc genhtml_function_coverage=1 00:11:01.524 --rc genhtml_legend=1 00:11:01.524 --rc geninfo_all_blocks=1 00:11:01.524 --rc geninfo_unexecuted_blocks=1 00:11:01.524 00:11:01.524 ' 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.524 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.525 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.525 ************************************ 00:11:01.525 START TEST nvmf_example 00:11:01.525 ************************************ 00:11:01.525 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:01.784 * Looking for test storage... 
00:11:01.784 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:01.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.784 --rc genhtml_branch_coverage=1 00:11:01.784 --rc genhtml_function_coverage=1 00:11:01.784 --rc genhtml_legend=1 00:11:01.784 --rc geninfo_all_blocks=1 00:11:01.784 --rc geninfo_unexecuted_blocks=1 00:11:01.784 00:11:01.784 ' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:01.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.784 --rc genhtml_branch_coverage=1 00:11:01.784 --rc genhtml_function_coverage=1 00:11:01.784 --rc genhtml_legend=1 00:11:01.784 --rc geninfo_all_blocks=1 00:11:01.784 --rc geninfo_unexecuted_blocks=1 00:11:01.784 00:11:01.784 ' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:01.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.784 --rc genhtml_branch_coverage=1 00:11:01.784 --rc genhtml_function_coverage=1 00:11:01.784 --rc genhtml_legend=1 00:11:01.784 --rc geninfo_all_blocks=1 00:11:01.784 --rc geninfo_unexecuted_blocks=1 00:11:01.784 00:11:01.784 ' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:01.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.784 --rc genhtml_branch_coverage=1 00:11:01.784 --rc genhtml_function_coverage=1 00:11:01.784 --rc genhtml_legend=1 00:11:01.784 --rc geninfo_all_blocks=1 00:11:01.784 --rc geninfo_unexecuted_blocks=1 00:11:01.784 00:11:01.784 ' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
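The `lt 1.15 2` trace above is scripts/common.sh comparing dotted version strings field by field; lcov 1.15 sorts before 2, so the older `--rc lcov_*` option set is selected. A compact sketch of that comparison (simplified: numeric fields only, no handling of suffixes such as rc tags); the uname/OS check resumes below:

    # Field-wise dotted-version "less than", in the spirit of cmp_versions.
    ver_lt() {
        local -a a b
        IFS=. read -ra a <<<"$1"
        IFS=. read -ra b <<<"$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            # Missing fields compare as 0 (1.15 vs 2 -> 1.15 vs 2.0).
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "old lcov: enable --rc lcov_* coverage options"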
00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:01.784 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.785 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.785 10:52:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:11:07.053 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 
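The device walk above selects Mellanox parts by vendor/device ID (0x15b3:0x1017, the two ConnectX-5 ports this rig reports), and the lookup that follows lists the kernel netdevs sitting under each matched PCI function. The same discovery can be sketched with nothing but sysfs (IDs taken from this log; no lspci dependency):

    # Sysfs-only sketch of the vendor/device match and netdev lookup
    # traced here; 0x15b3:0x1017 are the IDs reported in this log.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x15b3 && $(<"$pci/device") == 0x1017 ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done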
00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:11:07.053 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:11:07.053 Found net devices under 0000:af:00.0: mlx_0_0 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.053 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:11:07.054 Found net devices under 0000:af:00.1: mlx_0_1 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@448 -- # rdma_device_init 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:07.054 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:07.314 10:52:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for 
nic_name in $(get_rdma_if_list) 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:07.314 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.314 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:11:07.314 altname enp175s0f0np0 00:11:07.314 altname ens801f0np0 00:11:07.314 inet 192.168.100.8/24 scope global mlx_0_0 00:11:07.314 valid_lft forever preferred_lft forever 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:07.314 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.314 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:11:07.314 altname enp175s0f1np1 00:11:07.314 altname ens801f1np1 00:11:07.314 inet 192.168.100.9/24 scope global mlx_0_1 00:11:07.314 valid_lft forever preferred_lft forever 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
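[Editor's note] The ip/awk/cut pipeline traced above is what resolves each RDMA interface to its test-bed address (192.168.100.8 and 192.168.100.9 here). Collected into a standalone helper for reference; a sketch mirroring the traced commands, with the empty-address check added for illustration:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address mlx_0_0)              # -> 192.168.100.8 on this rig
    [[ -n $ip ]] || echo "mlx_0_0 has no IPv4 address" >&2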
00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:07.314 192.168.100.9' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:07.314 192.168.100.9' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:07.314 10:52:56 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:07.314 192.168.100.9' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:07.314 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1361338 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1361338 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 1361338 ']' 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
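[Editor's note] waitforlisten (max_retries=100 in the trace) is the gate between launching the nvmf example app and the RPC calls that follow: it polls until the pid is alive and answering on /var/tmp/spdk.sock. A minimal sketch of that pattern; the real helper probes via the RPC framework itself, so the bare socket test below is a labeled simplification:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died while waiting
            [[ -S $rpc_addr ]] && return 0           # socket is up, app is listening
            sleep 0.5
        done
        return 1                                     # timed out
    }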
00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.315 10:52:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.251 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:08.509 10:52:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:20.717 Initializing NVMe Controllers 00:11:20.717 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:20.717 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:20.717 Initialization complete. Launching workers. 00:11:20.717 ======================================================== 00:11:20.717 Latency(us) 00:11:20.717 Device Information : IOPS MiB/s Average min max 00:11:20.717 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 22478.80 87.81 2847.14 669.85 15983.86 00:11:20.717 ======================================================== 00:11:20.717 Total : 22478.80 87.81 2847.14 669.85 15983.86 00:11:20.717 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:20.717 rmmod nvme_rdma 00:11:20.717 rmmod nvme_fabrics 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1361338 ']' 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1361338 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 1361338 ']' 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 1361338 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1361338 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:11:20.717 10:53:08 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1361338' 00:11:20.717 killing process with pid 1361338 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 1361338 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 1361338 00:11:20.717 nvmf threads initialize successfully 00:11:20.717 bdev subsystem init successfully 00:11:20.717 created a nvmf target service 00:11:20.717 create targets's poll groups done 00:11:20.717 all subsystems of target started 00:11:20.717 nvmf target is running 00:11:20.717 all subsystems of target stopped 00:11:20.717 destroy targets's poll groups done 00:11:20.717 destroyed the nvmf target service 00:11:20.717 bdev subsystem finish successfully 00:11:20.717 nvmf threads destroy successfully 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.717 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 00:11:20.718 real 0m18.594s 00:11:20.718 user 0m52.273s 00:11:20.718 sys 0m4.606s 00:11:20.718 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.718 10:53:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 ************************************ 00:11:20.718 END TEST nvmf_example 00:11:20.718 ************************************ 00:11:20.718 10:53:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:20.718 10:53:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.718 10:53:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.718 10:53:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 ************************************ 00:11:20.718 START TEST nvmf_filesystem 00:11:20.718 ************************************ 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:20.718 * Looking for test storage... 
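[Editor's note] Teardown above runs killprocess: verify the pid is still alive (kill -0), check via ps that it has not been recycled into something else (notably sudo), then kill and wait. Reassembled as one function from the traced steps; a sketch, not the harness's exact body:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                       # already gone
        process_name=$(ps --no-headers -o comm= "$pid")  # uname branch elided: Linux here
        [[ $process_name == sudo ]] && return 1          # pid was recycled; refuse
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }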
00:11:20.718 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.718 --rc genhtml_branch_coverage=1 00:11:20.718 --rc genhtml_function_coverage=1 00:11:20.718 --rc genhtml_legend=1 00:11:20.718 --rc geninfo_all_blocks=1 00:11:20.718 --rc geninfo_unexecuted_blocks=1 00:11:20.718 00:11:20.718 ' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.718 --rc genhtml_branch_coverage=1 00:11:20.718 --rc genhtml_function_coverage=1 00:11:20.718 --rc genhtml_legend=1 00:11:20.718 --rc geninfo_all_blocks=1 00:11:20.718 --rc geninfo_unexecuted_blocks=1 00:11:20.718 00:11:20.718 ' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.718 --rc genhtml_branch_coverage=1 00:11:20.718 --rc genhtml_function_coverage=1 00:11:20.718 --rc genhtml_legend=1 00:11:20.718 --rc geninfo_all_blocks=1 00:11:20.718 --rc geninfo_unexecuted_blocks=1 00:11:20.718 00:11:20.718 ' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.718 --rc genhtml_branch_coverage=1 00:11:20.718 --rc genhtml_function_coverage=1 00:11:20.718 --rc genhtml_legend=1 00:11:20.718 --rc geninfo_all_blocks=1 00:11:20.718 --rc geninfo_unexecuted_blocks=1 00:11:20.718 00:11:20.718 ' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:11:20.718 10:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:20.718 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:20.719 10:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:20.719 10:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:11:20.719 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:20.719 #define SPDK_CONFIG_H 00:11:20.719 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:20.719 #define SPDK_CONFIG_APPS 1 00:11:20.719 #define SPDK_CONFIG_ARCH native 00:11:20.719 #undef SPDK_CONFIG_ASAN 00:11:20.719 #undef SPDK_CONFIG_AVAHI 00:11:20.719 #undef SPDK_CONFIG_CET 00:11:20.719 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:20.719 #define SPDK_CONFIG_COVERAGE 1 00:11:20.719 #define SPDK_CONFIG_CROSS_PREFIX 00:11:20.719 #undef SPDK_CONFIG_CRYPTO 00:11:20.719 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:20.719 #undef SPDK_CONFIG_CUSTOMOCF 00:11:20.719 #undef SPDK_CONFIG_DAOS 00:11:20.719 #define SPDK_CONFIG_DAOS_DIR 00:11:20.719 #define SPDK_CONFIG_DEBUG 1 00:11:20.719 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:20.719 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:11:20.719 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:20.719 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:20.719 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:20.719 #undef SPDK_CONFIG_DPDK_UADK 00:11:20.719 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:11:20.719 #define SPDK_CONFIG_EXAMPLES 1 00:11:20.719 #undef SPDK_CONFIG_FC 00:11:20.719 #define SPDK_CONFIG_FC_PATH 00:11:20.719 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:20.719 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:20.719 #define SPDK_CONFIG_FSDEV 1 00:11:20.719 #undef SPDK_CONFIG_FUSE 00:11:20.719 #undef SPDK_CONFIG_FUZZER 00:11:20.719 #define SPDK_CONFIG_FUZZER_LIB 00:11:20.719 #undef SPDK_CONFIG_GOLANG 00:11:20.719 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:20.719 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:20.719 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:20.719 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:20.719 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:20.719 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:20.719 #undef SPDK_CONFIG_HAVE_LZ4 00:11:20.719 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:20.719 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:20.719 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:20.719 #define SPDK_CONFIG_IDXD 1 00:11:20.719 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:20.719 #undef SPDK_CONFIG_IPSEC_MB 00:11:20.719 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:20.719 #define SPDK_CONFIG_ISAL 1 00:11:20.719 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:20.719 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:20.719 #define SPDK_CONFIG_LIBDIR 00:11:20.719 #undef SPDK_CONFIG_LTO 00:11:20.719 #define SPDK_CONFIG_MAX_LCORES 128 00:11:20.719 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:20.719 #define SPDK_CONFIG_NVME_CUSE 1 00:11:20.719 #undef SPDK_CONFIG_OCF 00:11:20.719 #define SPDK_CONFIG_OCF_PATH 00:11:20.719 #define SPDK_CONFIG_OPENSSL_PATH 00:11:20.719 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:20.719 #define SPDK_CONFIG_PGO_DIR 00:11:20.720 #undef SPDK_CONFIG_PGO_USE 00:11:20.720 #define SPDK_CONFIG_PREFIX /usr/local 00:11:20.720 #undef SPDK_CONFIG_RAID5F 00:11:20.720 #undef SPDK_CONFIG_RBD 00:11:20.720 #define SPDK_CONFIG_RDMA 1 00:11:20.720 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:20.720 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:20.720 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:20.720 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:20.720 #define SPDK_CONFIG_SHARED 1 00:11:20.720 #undef SPDK_CONFIG_SMA 00:11:20.720 
#define SPDK_CONFIG_TESTS 1 00:11:20.720 #undef SPDK_CONFIG_TSAN 00:11:20.720 #define SPDK_CONFIG_UBLK 1 00:11:20.720 #define SPDK_CONFIG_UBSAN 1 00:11:20.720 #undef SPDK_CONFIG_UNIT_TESTS 00:11:20.720 #undef SPDK_CONFIG_URING 00:11:20.720 #define SPDK_CONFIG_URING_PATH 00:11:20.720 #undef SPDK_CONFIG_URING_ZNS 00:11:20.720 #undef SPDK_CONFIG_USDT 00:11:20.720 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:20.720 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:20.720 #undef SPDK_CONFIG_VFIO_USER 00:11:20.720 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:20.720 #define SPDK_CONFIG_VHOST 1 00:11:20.720 #define SPDK_CONFIG_VIRTIO 1 00:11:20.720 #undef SPDK_CONFIG_VTUNE 00:11:20.720 #define SPDK_CONFIG_VTUNE_DIR 00:11:20.720 #define SPDK_CONFIG_WERROR 1 00:11:20.720 #define SPDK_CONFIG_WPDK_DIR 00:11:20.720 #undef SPDK_CONFIG_XNVME 00:11:20.720 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:20.720 10:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:20.720 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:20.721 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1363542 ]] 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1363542 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.L3KN2u 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.L3KN2u/tests/target /tmp/spdk.L3KN2u 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:20.722 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=185043300352 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963985920 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10920685568 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97968533504 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981992960 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=13459456 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169712128 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192797184 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23085056 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981206528 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981992960 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=786432 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:20.723 10:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596386304 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596398592 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:20.723 * Looking for test storage... 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=185043300352 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13135278080 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.723 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:20.723 10:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:20.723 10:53:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.723 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.724 --rc genhtml_branch_coverage=1 00:11:20.724 --rc genhtml_function_coverage=1 00:11:20.724 --rc genhtml_legend=1 00:11:20.724 --rc geninfo_all_blocks=1 00:11:20.724 --rc geninfo_unexecuted_blocks=1 00:11:20.724 00:11:20.724 ' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.724 --rc genhtml_branch_coverage=1 00:11:20.724 --rc genhtml_function_coverage=1 00:11:20.724 --rc genhtml_legend=1 00:11:20.724 --rc geninfo_all_blocks=1 00:11:20.724 --rc geninfo_unexecuted_blocks=1 00:11:20.724 00:11:20.724 ' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.724 --rc genhtml_branch_coverage=1 00:11:20.724 --rc genhtml_function_coverage=1 00:11:20.724 --rc genhtml_legend=1 00:11:20.724 --rc geninfo_all_blocks=1 00:11:20.724 --rc geninfo_unexecuted_blocks=1 00:11:20.724 00:11:20.724 ' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.724 --rc genhtml_branch_coverage=1 00:11:20.724 --rc genhtml_function_coverage=1 00:11:20.724 --rc genhtml_legend=1 00:11:20.724 --rc geninfo_all_blocks=1 00:11:20.724 --rc geninfo_unexecuted_blocks=1 00:11:20.724 00:11:20.724 ' 00:11:20.724 
10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.724 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.724 10:53:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.998 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:11:25.999 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:25.999 
10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:11:25.999 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:11:25.999 Found net devices under 0000:af:00.0: mlx_0_0 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:11:25.999 Found net devices under 0000:af:00.1: mlx_0_1 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ 
rdma == tcp ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:11:25.999 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.000 
10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:26.000 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.000 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:11:26.000 altname enp175s0f0np0 00:11:26.000 altname ens801f0np0 00:11:26.000 inet 192.168.100.8/24 scope global mlx_0_0 00:11:26.000 valid_lft forever preferred_lft forever 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:26.000 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:26.259 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.259 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:11:26.259 altname enp175s0f1np1 00:11:26.259 altname ens801f1np1 00:11:26.259 inet 192.168.100.9/24 scope global mlx_0_1 00:11:26.259 valid_lft forever preferred_lft forever 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # 
get_rdma_if_list 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:26.259 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:26.260 192.168.100.9' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:26.260 192.168.100.9' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:26.260 192.168.100.9' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.260 10:53:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 ************************************ 00:11:26.260 START TEST nvmf_filesystem_no_in_capsule 00:11:26.260 ************************************ 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1366646 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1366646 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1366646 ']' 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:26.260 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 [2024-11-15 10:53:15.075738] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:11:26.260 [2024-11-15 10:53:15.075786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.260 [2024-11-15 10:53:15.138840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.519 [2024-11-15 10:53:15.182429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.519 [2024-11-15 10:53:15.182464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.519 [2024-11-15 10:53:15.182471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.519 [2024-11-15 10:53:15.182477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.519 [2024-11-15 10:53:15.182482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
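
The stretch of trace above is nvmfappstart launching build/bin/nvmf_tgt and then waitforlisten blocking until the target answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming a 0.1 s poll interval and a retry cap; the real helper is waitforlisten() in SPDK's test/common/autotest_common.sh, whose internals may differ:

  # Sketch only: start the target, then poll its RPC socket until it
  # serves requests. rpc_get_methods is a real SPDK RPC; the retry
  # count and sleep interval are assumptions. $rootdir: SPDK repo
  # root (assumed).
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break    # target is up; safe to start issuing RPCs
      fi
      sleep 0.1
  done
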
00:11:26.519 [2024-11-15 10:53:15.184191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.519 [2024-11-15 10:53:15.184270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.519 [2024-11-15 10:53:15.184382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.519 [2024-11-15 10:53:15.184384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.519 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.519 [2024-11-15 10:53:15.330612] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:26.519 [2024-11-15 10:53:15.351683] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xee9230/0xeed720) succeed. 00:11:26.519 [2024-11-15 10:53:15.361773] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeea8c0/0xf2edc0) succeed. 
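
At this point the RDMA transport exists with -c 0 (no in-capsule data for this pass) and both IB devices are registered. The entries that follow create the backing bdev, the subsystem, its namespace, and the RDMA listener via rpc_cmd, which wraps scripts/rpc.py. Consolidated into one session, with arguments copied from the trace and only the grouping editorial:

  # Target-side setup as traced below, expressed as direct rpc.py calls.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  $rpc bdev_malloc_create 512 512 -b Malloc1    # 512 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
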
00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 Malloc1 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 [2024-11-15 10:53:15.623912] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:26.851 10:53:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.851 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:26.851 { 00:11:26.851 "name": "Malloc1", 00:11:26.851 "aliases": [ 00:11:26.851 "9234e683-bb0b-4002-8a54-119b7aeb2f19" 00:11:26.851 ], 00:11:26.851 "product_name": "Malloc disk", 00:11:26.851 "block_size": 512, 00:11:26.851 "num_blocks": 1048576, 00:11:26.851 "uuid": "9234e683-bb0b-4002-8a54-119b7aeb2f19", 00:11:26.851 "assigned_rate_limits": { 00:11:26.851 "rw_ios_per_sec": 0, 00:11:26.851 "rw_mbytes_per_sec": 0, 00:11:26.851 "r_mbytes_per_sec": 0, 00:11:26.851 "w_mbytes_per_sec": 0 00:11:26.851 }, 00:11:26.851 "claimed": true, 00:11:26.851 "claim_type": "exclusive_write", 00:11:26.851 "zoned": false, 00:11:26.851 "supported_io_types": { 00:11:26.851 "read": true, 00:11:26.851 "write": true, 00:11:26.851 "unmap": true, 00:11:26.851 "flush": true, 00:11:26.851 "reset": true, 00:11:26.851 "nvme_admin": false, 00:11:26.851 "nvme_io": false, 00:11:26.851 "nvme_io_md": false, 00:11:26.851 "write_zeroes": true, 00:11:26.851 "zcopy": true, 00:11:26.851 "get_zone_info": false, 00:11:26.851 "zone_management": false, 00:11:26.851 "zone_append": false, 00:11:26.851 "compare": false, 00:11:26.851 "compare_and_write": false, 00:11:26.851 "abort": true, 00:11:26.851 "seek_hole": false, 00:11:26.851 "seek_data": false, 00:11:26.851 "copy": true, 00:11:26.851 "nvme_iov_md": false 00:11:26.851 }, 00:11:26.851 "memory_domains": [ 00:11:26.851 { 00:11:26.851 "dma_device_id": "system", 00:11:26.851 "dma_device_type": 1 00:11:26.851 }, 00:11:26.851 { 00:11:26.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.852 "dma_device_type": 2 00:11:26.852 } 00:11:26.852 ], 00:11:26.852 "driver_specific": {} 00:11:26.852 } 00:11:26.852 ]' 00:11:26.852 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:27.131 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:27.131 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:27.131 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:27.131 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:27.131 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:27.131 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:11:27.131 10:53:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:30.424 10:53:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.424 10:53:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:30.424 10:53:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.424 10:53:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:30.424 10:53:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:32.325 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:32.326 10:53:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:32.326 10:53:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 ************************************ 00:11:33.260 START TEST filesystem_ext4 00:11:33.260 ************************************ 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:33.260 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:33.260 mke2fs 1.47.0 (5-Feb-2023) 00:11:33.519 Discarding device blocks: 0/522240 done 00:11:33.519 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:33.519 Filesystem UUID: c965e06c-a9a6-4ff2-bb77-25069b839427 00:11:33.519 Superblock backups stored on 
blocks: 00:11:33.519 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:33.519 00:11:33.519 Allocating group tables: 0/64 done 00:11:33.519 Writing inode tables: 0/64 done 00:11:33.519 Creating journal (8192 blocks): done 00:11:33.519 Writing superblocks and filesystem accounting information: 0/64 done 00:11:33.519 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1366646 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.519 00:11:33.519 real 0m0.184s 00:11:33.519 user 0m0.023s 00:11:33.519 sys 0m0.064s 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:33.519 ************************************ 00:11:33.519 END TEST filesystem_ext4 00:11:33.519 ************************************ 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:11:33.519 ************************************ 00:11:33.519 START TEST filesystem_btrfs 00:11:33.519 ************************************ 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:33.519 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:33.778 btrfs-progs v6.8.1 00:11:33.778 See https://btrfs.readthedocs.io for more information. 00:11:33.778 00:11:33.778 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:33.778 NOTE: several default settings have changed in version 5.15, please make sure 00:11:33.778 this does not affect your deployments: 00:11:33.778 - DUP for metadata (-m dup) 00:11:33.778 - enabled no-holes (-O no-holes) 00:11:33.778 - enabled free-space-tree (-R free-space-tree) 00:11:33.778 00:11:33.778 Label: (null) 00:11:33.778 UUID: 2cdf66c1-a5d6-4613-a4f4-6cdfb3b1344d 00:11:33.778 Node size: 16384 00:11:33.778 Sector size: 4096 (CPU page size: 4096) 00:11:33.778 Filesystem size: 510.00MiB 00:11:33.778 Block group profiles: 00:11:33.778 Data: single 8.00MiB 00:11:33.778 Metadata: DUP 32.00MiB 00:11:33.778 System: DUP 8.00MiB 00:11:33.778 SSD detected: yes 00:11:33.778 Zoned device: no 00:11:33.778 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:33.778 Checksum: crc32c 00:11:33.778 Number of devices: 1 00:11:33.778 Devices: 00:11:33.778 ID SIZE PATH 00:11:33.778 1 510.00MiB /dev/nvme0n1p1 00:11:33.778 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1366646 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.778 00:11:33.778 real 0m0.234s 00:11:33.778 user 0m0.037s 00:11:33.778 sys 0m0.105s 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:33.778 ************************************ 00:11:33.778 END TEST filesystem_btrfs 
00:11:33.778 ************************************ 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.778 ************************************ 00:11:33.778 START TEST filesystem_xfs 00:11:33.778 ************************************ 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:33.778 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:33.779 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:33.779 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:33.779 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:34.037 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:34.037 = sectsz=512 attr=2, projid32bit=1 00:11:34.037 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:34.037 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:34.037 data = bsize=4096 blocks=130560, imaxpct=25 00:11:34.037 = sunit=0 swidth=0 blks 00:11:34.037 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:34.037 log =internal log bsize=4096 blocks=16384, version=2 00:11:34.037 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:34.037 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:34.037 Discarding blocks...Done. 
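
After each mkfs the subtest runs the same mount/IO/unmount body and then verifies that the target survived. Condensed from the traced steps in target/filesystem.sh (@23 through @43); the umount retry loop and failure paths are elided:

  # Condensed sketch of the per-filesystem test body seen in the trace.
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                         # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still exposed
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present
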
00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1366646 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.037 00:11:34.037 real 0m0.203s 00:11:34.037 user 0m0.026s 00:11:34.037 sys 0m0.065s 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:34.037 ************************************ 00:11:34.037 END TEST filesystem_xfs 00:11:34.037 ************************************ 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:34.037 10:53:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:36.567 10:53:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1366646 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1366646 ']' 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1366646 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1366646 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1366646' 00:11:36.567 killing process with pid 1366646 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 1366646 00:11:36.567 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 1366646 00:11:36.825 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:36.825 00:11:36.825 real 0m10.657s 00:11:36.825 user 0m41.891s 00:11:36.825 sys 0m1.120s 00:11:36.825 10:53:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.825 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 ************************************ 00:11:36.825 END TEST nvmf_filesystem_no_in_capsule 00:11:36.825 ************************************ 00:11:36.825 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:36.825 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:36.825 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.825 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.083 ************************************ 00:11:37.083 START TEST nvmf_filesystem_in_capsule 00:11:37.084 ************************************ 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1368733 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1368733 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1368733 ']' 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
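
The run that starts here repeats the whole filesystem suite with in-capsule data enabled; the only parameter that changes is the -c value handed to nvmf_create_transport (0 before, 4096 now). A sketch of the dispatch, reconstructed from the run_test lines and the target/filesystem.sh line numbers in the trace; the shared body is abbreviated and not the verbatim script:

  # Reconstruction: both top-level tests call the same
  # nvmf_filesystem_part body with a different capsule size.
  nvmf_filesystem_part() {
      local in_capsule=$1
      nvmfappstart -m 0xF
      rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c "$in_capsule"
      # ... malloc bdev, subsystem, listener, ext4/btrfs/xfs subtests, teardown ...
  }
  run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
  run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
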
00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:37.084 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.084 [2024-11-15 10:53:25.796062] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:11:37.084 [2024-11-15 10:53:25.796101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.084 [2024-11-15 10:53:25.857854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.084 [2024-11-15 10:53:25.900995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.084 [2024-11-15 10:53:25.901031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.084 [2024-11-15 10:53:25.901038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.084 [2024-11-15 10:53:25.901044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.084 [2024-11-15 10:53:25.901048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.084 [2024-11-15 10:53:25.902563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.084 [2024-11-15 10:53:25.902663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.084 [2024-11-15 10:53:25.902769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.084 [2024-11-15 10:53:25.902770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.342 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:37.342 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:37.342 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.342 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.342 10:53:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.342 10:53:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.342 [2024-11-15 10:53:26.058538] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe9a230/0xe9e720) succeed. 00:11:37.342 [2024-11-15 10:53:26.067825] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe9b8c0/0xedfdc0) succeed. 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.342 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.601 Malloc1 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.601 [2024-11-15 10:53:26.345236] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:37.601 10:53:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:37.601 { 00:11:37.601 "name": "Malloc1", 00:11:37.601 "aliases": [ 00:11:37.601 "9f7a02bd-1474-4685-a1a8-acfad4b0e597" 00:11:37.601 ], 00:11:37.601 "product_name": "Malloc disk", 00:11:37.601 "block_size": 512, 00:11:37.601 "num_blocks": 1048576, 00:11:37.601 "uuid": "9f7a02bd-1474-4685-a1a8-acfad4b0e597", 00:11:37.601 "assigned_rate_limits": { 00:11:37.601 "rw_ios_per_sec": 0, 00:11:37.601 "rw_mbytes_per_sec": 0, 00:11:37.601 "r_mbytes_per_sec": 0, 00:11:37.601 "w_mbytes_per_sec": 0 00:11:37.601 }, 00:11:37.601 "claimed": true, 00:11:37.601 "claim_type": "exclusive_write", 00:11:37.601 "zoned": false, 00:11:37.601 "supported_io_types": { 00:11:37.601 "read": true, 00:11:37.601 "write": true, 00:11:37.601 "unmap": true, 00:11:37.601 "flush": true, 00:11:37.601 "reset": true, 00:11:37.601 "nvme_admin": false, 00:11:37.601 "nvme_io": false, 00:11:37.601 "nvme_io_md": false, 00:11:37.601 "write_zeroes": true, 00:11:37.601 "zcopy": true, 00:11:37.601 "get_zone_info": false, 00:11:37.601 "zone_management": false, 00:11:37.601 "zone_append": false, 00:11:37.601 "compare": false, 00:11:37.601 "compare_and_write": false, 00:11:37.601 "abort": true, 00:11:37.601 "seek_hole": false, 00:11:37.601 "seek_data": false, 00:11:37.601 "copy": true, 00:11:37.601 "nvme_iov_md": false 00:11:37.601 }, 00:11:37.601 "memory_domains": [ 00:11:37.601 { 00:11:37.601 "dma_device_id": "system", 00:11:37.601 "dma_device_type": 1 00:11:37.601 }, 00:11:37.601 { 00:11:37.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.601 "dma_device_type": 2 00:11:37.601 } 00:11:37.601 ], 00:11:37.601 "driver_specific": {} 00:11:37.601 } 00:11:37.601 ]' 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:37.601 10:53:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:37.601 10:53:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:40.883 10:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.883 10:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:40.883 10:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.883 10:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:40.883 10:53:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:42.785 10:53:31 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:42.785 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:43.043 10:53:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.980 ************************************ 00:11:43.980 START TEST filesystem_in_capsule_ext4 00:11:43.980 ************************************ 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:43.980 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:43.980 mke2fs 1.47.0 (5-Feb-2023) 00:11:44.239 Discarding device blocks: 0/522240 done 00:11:44.239 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:44.239 Filesystem UUID: b576274d-c501-4142-9885-cce0baf6453b 00:11:44.239 Superblock backups stored on blocks: 00:11:44.239 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:44.239 00:11:44.239 Allocating group tables: 0/64 done 00:11:44.239 Writing inode tables: 0/64 done 00:11:44.239 Creating journal (8192 blocks): done 00:11:44.239 Writing superblocks and filesystem accounting information: 0/64 done 00:11:44.239 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:44.239 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.240 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1368733 00:11:44.240 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.240 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.240 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.240 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.240 00:11:44.240 real 0m0.185s 00:11:44.240 user 0m0.018s 00:11:44.240 sys 0m0.069s 00:11:44.240 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.240 10:53:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:44.240 ************************************ 00:11:44.240 END TEST filesystem_in_capsule_ext4 00:11:44.240 ************************************ 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 
-- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.240 ************************************ 00:11:44.240 START TEST filesystem_in_capsule_btrfs 00:11:44.240 ************************************ 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:44.240 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:44.499 btrfs-progs v6.8.1 00:11:44.499 See https://btrfs.readthedocs.io for more information. 00:11:44.499 00:11:44.499 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:44.499 NOTE: several default settings have changed in version 5.15, please make sure 00:11:44.499 this does not affect your deployments: 00:11:44.499 - DUP for metadata (-m dup) 00:11:44.499 - enabled no-holes (-O no-holes) 00:11:44.499 - enabled free-space-tree (-R free-space-tree) 00:11:44.499 00:11:44.499 Label: (null) 00:11:44.499 UUID: 758100eb-0561-46f0-8c80-de653ede8f10 00:11:44.499 Node size: 16384 00:11:44.499 Sector size: 4096 (CPU page size: 4096) 00:11:44.499 Filesystem size: 510.00MiB 00:11:44.499 Block group profiles: 00:11:44.499 Data: single 8.00MiB 00:11:44.499 Metadata: DUP 32.00MiB 00:11:44.499 System: DUP 8.00MiB 00:11:44.499 SSD detected: yes 00:11:44.499 Zoned device: no 00:11:44.499 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:44.499 Checksum: crc32c 00:11:44.499 Number of devices: 1 00:11:44.499 Devices: 00:11:44.499 ID SIZE PATH 00:11:44.499 1 510.00MiB /dev/nvme0n1p1 00:11:44.499 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1368733 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.499 00:11:44.499 real 0m0.233s 00:11:44.499 user 0m0.024s 00:11:44.499 sys 0m0.115s 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.499 ************************************ 00:11:44.499 END TEST filesystem_in_capsule_btrfs 00:11:44.499 ************************************ 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 ************************************ 00:11:44.499 START TEST filesystem_in_capsule_xfs 00:11:44.499 ************************************ 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:44.499 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:44.758 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:44.758 = sectsz=512 attr=2, projid32bit=1 00:11:44.758 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:44.758 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:44.758 data = bsize=4096 blocks=130560, imaxpct=25 00:11:44.758 = sunit=0 swidth=0 blks 00:11:44.758 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:44.758 log =internal log bsize=4096 blocks=16384, version=2 00:11:44.758 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:44.758 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:44.758 Discarding blocks...Done. 
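(The ext4, btrfs, and xfs subtests above all funnel through the make_filesystem helper whose xtrace lines appear at common/autotest_common.sh@928-947. A hedged reconstruction follows; the force-flag logic and the final return 0 mirror the trace, while the retry budget and sleep are assumptions, since the trace only shows the success path.)

make_filesystem() {
    local fstype=$1 dev_name=$2 i=0 force
    # ext4 forces with -F, the other filesystems with -f (trace @933-@936)
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    until "mkfs.$fstype" $force "$dev_name"; do
        (( ++i > 2 )) && return 1    # retry budget assumed, not from the trace
        sleep 1
    done
    return 0                         # the "return 0" at @947 above
}
make_filesystem xfs /dev/nvme0n1p1   # same call the xfs subtest makes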
00:11:44.758 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:44.758 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.758 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.758 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:44.758 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.758 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1368733 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.759 00:11:44.759 real 0m0.192s 00:11:44.759 user 0m0.024s 00:11:44.759 sys 0m0.069s 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:44.759 ************************************ 00:11:44.759 END TEST filesystem_in_capsule_xfs 00:11:44.759 ************************************ 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:44.759 10:53:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.291 10:53:35 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1368733 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1368733 ']' 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1368733 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:47.291 10:53:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1368733 00:11:47.292 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:47.292 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:47.292 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1368733' 00:11:47.292 killing process with pid 1368733 00:11:47.292 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 1368733 00:11:47.292 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 1368733 00:11:47.550 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:47.550 00:11:47.550 real 0m10.683s 
00:11:47.550 user 0m41.914s 00:11:47.550 sys 0m1.159s 00:11:47.550 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:47.550 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.550 ************************************ 00:11:47.550 END TEST nvmf_filesystem_in_capsule 00:11:47.550 ************************************ 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:47.809 rmmod nvme_rdma 00:11:47.809 rmmod nvme_fabrics 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:47.809 00:11:47.809 real 0m27.480s 00:11:47.809 user 1m25.700s 00:11:47.809 sys 0m6.667s 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:47.809 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.809 ************************************ 00:11:47.809 END TEST nvmf_filesystem 00:11:47.809 ************************************ 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.810 ************************************ 00:11:47.810 START TEST nvmf_target_discovery 00:11:47.810 ************************************ 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:47.810 * Looking for test storage... 
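(nvmftestfini, traced just above, tears the RDMA fabric down before the discovery suite starts. A minimal sketch of its rdma branch: the sync and the {1..20} retry loop around module removal come from the trace, the sleep between attempts is an assumption.)

sync                                   # flush before yanking the modules
set +e                                 # removal may transiently fail; keep retrying
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1                            # back-off interval assumed
done
set -e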
00:11:47.810 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:47.810 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.069 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:48.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.070 --rc genhtml_branch_coverage=1 00:11:48.070 --rc genhtml_function_coverage=1 00:11:48.070 --rc genhtml_legend=1 00:11:48.070 --rc geninfo_all_blocks=1 00:11:48.070 --rc geninfo_unexecuted_blocks=1 00:11:48.070 00:11:48.070 ' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:48.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.070 --rc genhtml_branch_coverage=1 00:11:48.070 --rc genhtml_function_coverage=1 00:11:48.070 --rc genhtml_legend=1 00:11:48.070 --rc geninfo_all_blocks=1 00:11:48.070 --rc geninfo_unexecuted_blocks=1 00:11:48.070 00:11:48.070 ' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:48.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.070 --rc genhtml_branch_coverage=1 00:11:48.070 --rc genhtml_function_coverage=1 00:11:48.070 --rc genhtml_legend=1 00:11:48.070 --rc geninfo_all_blocks=1 00:11:48.070 --rc geninfo_unexecuted_blocks=1 00:11:48.070 00:11:48.070 ' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:48.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.070 --rc genhtml_branch_coverage=1 00:11:48.070 --rc genhtml_function_coverage=1 00:11:48.070 --rc genhtml_legend=1 00:11:48.070 --rc geninfo_all_blocks=1 00:11:48.070 --rc geninfo_unexecuted_blocks=1 00:11:48.070 00:11:48.070 ' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.070 10:53:36 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.070 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.070 10:53:36 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.336 10:53:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
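What the trace above sets up: nvmf/common.sh buckets NICs by PCI vendor:device ID (0x15b3 is Mellanox; 0x1017 is ConnectX-5, which matches the "Found 0000:af:00.0 (0x15b3 - 0x1017)" entries that follow), then narrows pci_devs down to the mlx bucket because this run uses SPDK_TEST_NVMF_NICS=mlx5. A minimal sketch of that bucketing, assuming a pci_bus_cache map filled from lspci -nD output — the real common.sh may populate it differently:

    declare -A pci_bus_cache
    while read -r addr _class ids _; do
        pci_bus_cache["0x${ids/:/:0x}"]+=" $addr"   # keys like "0x15b3:0x1017"
    done < <(lspci -nD)
    mellanox=0x15b3
    mlx=(${pci_bus_cache["$mellanox:0x1017"]})      # ConnectX-5 ports on this node
    for pci in "${mlx[@]}"; do
        echo "Found $pci"
    done
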
00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:11:53.336 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:11:53.336 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.336 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:11:53.337 Found net devices under 0000:af:00.0: mlx_0_0 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 
00:11:53.337 Found net devices under 0000:af:00.1: mlx_0_1 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # 
echo mlx_0_0 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:53.337 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.337 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:11:53.337 altname enp175s0f0np0 00:11:53.337 altname ens801f0np0 00:11:53.337 inet 192.168.100.8/24 scope global mlx_0_0 00:11:53.337 valid_lft forever preferred_lft forever 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:53.337 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:11:53.337 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:11:53.337 altname enp175s0f1np1 00:11:53.337 altname ens801f1np1 00:11:53.337 inet 192.168.100.9/24 scope global mlx_0_1 00:11:53.337 valid_lft forever preferred_lft forever 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:53.337 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.595 10:53:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:53.595 192.168.100.9' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:53.595 192.168.100.9' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:53.595 192.168.100.9' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1373828 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1373828 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 1373828 ']' 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.595 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.595 [2024-11-15 10:53:42.371819] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:11:53.595 [2024-11-15 10:53:42.371871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.595 [2024-11-15 10:53:42.438382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.595 [2024-11-15 10:53:42.479575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.595 [2024-11-15 10:53:42.479612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.595 [2024-11-15 10:53:42.479623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.595 [2024-11-15 10:53:42.479629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.595 [2024-11-15 10:53:42.479635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
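At this point nvmfappstart has launched the target that the rest of the test drives over RPC; the binary path, shared-memory id (-i 0), trace mask (-e 0xFFFF) and core mask (-m 0xF) are exactly the ones traced above. A rough stand-in for the launch-and-wait sequence — the polling loop below is a hypothetical substitute for SPDK's waitforlisten helper, using the /var/tmp/spdk.sock socket named in the log and the real rpc_get_methods RPC as a liveness probe:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app answers (or bail if the process dies)
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.2
    done
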
00:11:53.595 [2024-11-15 10:53:42.481305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.595 [2024-11-15 10:53:42.481419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.595 [2024-11-15 10:53:42.481489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.595 [2024-11-15 10:53:42.481491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.853 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.853 [2024-11-15 10:53:42.648714] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2082230/0x2086720) succeed. 00:11:53.854 [2024-11-15 10:53:42.658038] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20838c0/0x20c7dc0) succeed. 
00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 Null1 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 [2024-11-15 10:53:42.825014] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 Null2 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:54.112 10:53:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 Null3 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 Null4 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.112 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.113 10:53:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:54.371 00:11:54.371 Discovery Log Number of Records 6, Generation counter 6 00:11:54.371 =====Discovery Log Entry 0====== 00:11:54.371 trtype: rdma 00:11:54.371 adrfam: ipv4 00:11:54.371 subtype: current discovery subsystem 00:11:54.371 treq: not required 00:11:54.371 portid: 0 00:11:54.371 trsvcid: 4420 00:11:54.371 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.371 traddr: 192.168.100.8 00:11:54.371 eflags: explicit discovery connections, duplicate discovery information 00:11:54.371 rdma_prtype: not specified 00:11:54.371 rdma_qptype: connected 00:11:54.371 rdma_cms: rdma-cm 00:11:54.371 rdma_pkey: 0x0000 00:11:54.371 =====Discovery Log Entry 1====== 00:11:54.371 trtype: rdma 00:11:54.371 adrfam: ipv4 00:11:54.371 subtype: nvme subsystem 00:11:54.371 treq: not required 00:11:54.371 portid: 0 00:11:54.371 trsvcid: 4420 00:11:54.371 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:54.371 traddr: 192.168.100.8 00:11:54.371 eflags: none 00:11:54.371 rdma_prtype: not specified 00:11:54.371 rdma_qptype: connected 00:11:54.371 rdma_cms: rdma-cm 00:11:54.371 rdma_pkey: 0x0000 00:11:54.371 =====Discovery Log Entry 2====== 00:11:54.371 trtype: rdma 00:11:54.371 adrfam: ipv4 00:11:54.371 subtype: nvme subsystem 00:11:54.371 treq: not required 00:11:54.371 portid: 0 00:11:54.371 trsvcid: 4420 00:11:54.371 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:54.371 traddr: 192.168.100.8 00:11:54.371 eflags: none 00:11:54.371 rdma_prtype: not specified 00:11:54.371 rdma_qptype: connected 00:11:54.371 rdma_cms: rdma-cm 00:11:54.371 rdma_pkey: 0x0000 00:11:54.371 =====Discovery Log Entry 3====== 00:11:54.371 trtype: rdma 00:11:54.372 adrfam: ipv4 00:11:54.372 subtype: nvme subsystem 00:11:54.372 treq: not required 00:11:54.372 portid: 0 00:11:54.372 trsvcid: 4420 00:11:54.372 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:54.372 traddr: 192.168.100.8 00:11:54.372 eflags: none 00:11:54.372 rdma_prtype: not specified 00:11:54.372 rdma_qptype: connected 00:11:54.372 rdma_cms: rdma-cm 00:11:54.372 rdma_pkey: 0x0000 00:11:54.372 =====Discovery Log Entry 4====== 00:11:54.372 trtype: rdma 00:11:54.372 adrfam: ipv4 00:11:54.372 subtype: nvme subsystem 00:11:54.372 treq: not required 00:11:54.372 portid: 0 00:11:54.372 trsvcid: 4420 00:11:54.372 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:54.372 traddr: 192.168.100.8 00:11:54.372 eflags: none 00:11:54.372 rdma_prtype: not specified 00:11:54.372 rdma_qptype: connected 00:11:54.372 rdma_cms: rdma-cm 00:11:54.372 rdma_pkey: 0x0000 00:11:54.372 =====Discovery Log Entry 5====== 00:11:54.372 trtype: rdma 00:11:54.372 adrfam: ipv4 00:11:54.372 subtype: discovery subsystem referral 00:11:54.372 treq: not required 00:11:54.372 portid: 0 00:11:54.372 trsvcid: 4430 00:11:54.372 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.372 traddr: 192.168.100.8 00:11:54.372 eflags: none 00:11:54.372 rdma_prtype: unrecognized 00:11:54.372 rdma_qptype: unrecognized 00:11:54.372 rdma_cms: unrecognized 00:11:54.372 rdma_pkey: 0x0000 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:54.372 Perform nvmf subsystem discovery via RPC 00:11:54.372 10:53:43 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 [ 00:11:54.372 { 00:11:54.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:54.372 "subtype": "Discovery", 00:11:54.372 "listen_addresses": [ 00:11:54.372 { 00:11:54.372 "trtype": "RDMA", 00:11:54.372 "adrfam": "IPv4", 00:11:54.372 "traddr": "192.168.100.8", 00:11:54.372 "trsvcid": "4420" 00:11:54.372 } 00:11:54.372 ], 00:11:54.372 "allow_any_host": true, 00:11:54.372 "hosts": [] 00:11:54.372 }, 00:11:54.372 { 00:11:54.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.372 "subtype": "NVMe", 00:11:54.372 "listen_addresses": [ 00:11:54.372 { 00:11:54.372 "trtype": "RDMA", 00:11:54.372 "adrfam": "IPv4", 00:11:54.372 "traddr": "192.168.100.8", 00:11:54.372 "trsvcid": "4420" 00:11:54.372 } 00:11:54.372 ], 00:11:54.372 "allow_any_host": true, 00:11:54.372 "hosts": [], 00:11:54.372 "serial_number": "SPDK00000000000001", 00:11:54.372 "model_number": "SPDK bdev Controller", 00:11:54.372 "max_namespaces": 32, 00:11:54.372 "min_cntlid": 1, 00:11:54.372 "max_cntlid": 65519, 00:11:54.372 "namespaces": [ 00:11:54.372 { 00:11:54.372 "nsid": 1, 00:11:54.372 "bdev_name": "Null1", 00:11:54.372 "name": "Null1", 00:11:54.372 "nguid": "127BD94783C94C95A29F79910A9E13CF", 00:11:54.372 "uuid": "127bd947-83c9-4c95-a29f-79910a9e13cf" 00:11:54.372 } 00:11:54.372 ] 00:11:54.372 }, 00:11:54.372 { 00:11:54.372 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:54.372 "subtype": "NVMe", 00:11:54.372 "listen_addresses": [ 00:11:54.372 { 00:11:54.372 "trtype": "RDMA", 00:11:54.372 "adrfam": "IPv4", 00:11:54.372 "traddr": "192.168.100.8", 00:11:54.372 "trsvcid": "4420" 00:11:54.372 } 00:11:54.372 ], 00:11:54.372 "allow_any_host": true, 00:11:54.372 "hosts": [], 00:11:54.372 "serial_number": "SPDK00000000000002", 00:11:54.372 "model_number": "SPDK bdev Controller", 00:11:54.372 "max_namespaces": 32, 00:11:54.372 "min_cntlid": 1, 00:11:54.372 "max_cntlid": 65519, 00:11:54.372 "namespaces": [ 00:11:54.372 { 00:11:54.372 "nsid": 1, 00:11:54.372 "bdev_name": "Null2", 00:11:54.372 "name": "Null2", 00:11:54.372 "nguid": "467CB1866B7444EE8C81ECF976A81137", 00:11:54.372 "uuid": "467cb186-6b74-44ee-8c81-ecf976a81137" 00:11:54.372 } 00:11:54.372 ] 00:11:54.372 }, 00:11:54.372 { 00:11:54.372 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:54.372 "subtype": "NVMe", 00:11:54.372 "listen_addresses": [ 00:11:54.372 { 00:11:54.372 "trtype": "RDMA", 00:11:54.372 "adrfam": "IPv4", 00:11:54.372 "traddr": "192.168.100.8", 00:11:54.372 "trsvcid": "4420" 00:11:54.372 } 00:11:54.372 ], 00:11:54.372 "allow_any_host": true, 00:11:54.372 "hosts": [], 00:11:54.372 "serial_number": "SPDK00000000000003", 00:11:54.372 "model_number": "SPDK bdev Controller", 00:11:54.372 "max_namespaces": 32, 00:11:54.372 "min_cntlid": 1, 00:11:54.372 "max_cntlid": 65519, 00:11:54.372 "namespaces": [ 00:11:54.372 { 00:11:54.372 "nsid": 1, 00:11:54.372 "bdev_name": "Null3", 00:11:54.372 "name": "Null3", 00:11:54.372 "nguid": "B28F479769E741BE992D287A89AEFD2A", 00:11:54.372 "uuid": "b28f4797-69e7-41be-992d-287a89aefd2a" 00:11:54.372 } 00:11:54.372 ] 00:11:54.372 }, 00:11:54.372 { 00:11:54.372 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:54.372 "subtype": "NVMe", 00:11:54.372 "listen_addresses": [ 00:11:54.372 { 00:11:54.372 
"trtype": "RDMA", 00:11:54.372 "adrfam": "IPv4", 00:11:54.372 "traddr": "192.168.100.8", 00:11:54.372 "trsvcid": "4420" 00:11:54.372 } 00:11:54.372 ], 00:11:54.372 "allow_any_host": true, 00:11:54.372 "hosts": [], 00:11:54.372 "serial_number": "SPDK00000000000004", 00:11:54.372 "model_number": "SPDK bdev Controller", 00:11:54.372 "max_namespaces": 32, 00:11:54.372 "min_cntlid": 1, 00:11:54.372 "max_cntlid": 65519, 00:11:54.372 "namespaces": [ 00:11:54.372 { 00:11:54.372 "nsid": 1, 00:11:54.372 "bdev_name": "Null4", 00:11:54.372 "name": "Null4", 00:11:54.372 "nguid": "FCC16DFC275949D782135A6686A1B941", 00:11:54.372 "uuid": "fcc16dfc-2759-49d7-8213-5a6686a1b941" 00:11:54.372 } 00:11:54.372 ] 00:11:54.372 } 00:11:54.372 ] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:54.372 
10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.372 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:54.373 10:53:43 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:54.373 rmmod nvme_rdma 00:11:54.373 rmmod nvme_fabrics 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1373828 ']' 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1373828 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 1373828 ']' 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 1373828 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:54.373 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1373828 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1373828' 00:11:54.631 killing process with pid 1373828 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 1373828 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 1373828 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:54.631 00:11:54.631 real 0m6.929s 00:11:54.631 user 0m5.712s 00:11:54.631 sys 0m4.556s 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.631 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.631 ************************************ 00:11:54.631 END TEST nvmf_target_discovery 
00:11:54.631 ************************************ 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.890 ************************************ 00:11:54.890 START TEST nvmf_referrals 00:11:54.890 ************************************ 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:54.890 * Looking for test storage... 00:11:54.890 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:54.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.890 --rc genhtml_branch_coverage=1 00:11:54.890 --rc genhtml_function_coverage=1 00:11:54.890 --rc genhtml_legend=1 00:11:54.890 --rc geninfo_all_blocks=1 00:11:54.890 --rc geninfo_unexecuted_blocks=1 00:11:54.890 00:11:54.890 ' 00:11:54.890 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:54.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.890 --rc genhtml_branch_coverage=1 00:11:54.890 --rc genhtml_function_coverage=1 00:11:54.890 --rc genhtml_legend=1 00:11:54.890 --rc geninfo_all_blocks=1 00:11:54.891 --rc geninfo_unexecuted_blocks=1 00:11:54.891 00:11:54.891 ' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:54.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.891 --rc genhtml_branch_coverage=1 00:11:54.891 --rc genhtml_function_coverage=1 00:11:54.891 --rc genhtml_legend=1 00:11:54.891 --rc geninfo_all_blocks=1 00:11:54.891 --rc geninfo_unexecuted_blocks=1 00:11:54.891 00:11:54.891 ' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:54.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.891 --rc genhtml_branch_coverage=1 00:11:54.891 --rc genhtml_function_coverage=1 00:11:54.891 --rc genhtml_legend=1 00:11:54.891 --rc geninfo_all_blocks=1 00:11:54.891 --rc geninfo_unexecuted_blocks=1 00:11:54.891 00:11:54.891 ' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.891 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.891 10:53:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.157 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.157 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.157 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.157 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.157 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.157 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:12:00.158 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:00.158 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:00.158 Found net devices under 0000:af:00.0: mlx_0_0 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:00.158 Found net devices under 0000:af:00.1: mlx_0_1 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:00.158 10:53:48 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:00.158 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:00.158 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:00.158 altname enp175s0f0np0 00:12:00.158 altname ens801f0np0 00:12:00.158 inet 192.168.100.8/24 scope global mlx_0_0 00:12:00.158 valid_lft forever preferred_lft forever 00:12:00.158 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:00.159 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:00.159 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:00.159 altname enp175s0f1np1 00:12:00.159 altname ens801f1np1 00:12:00.159 inet 192.168.100.9/24 scope global mlx_0_1 00:12:00.159 valid_lft forever preferred_lft forever 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:00.159 192.168.100.9' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:00.159 192.168.100.9' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:00.159 
192.168.100.9' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1377046 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1377046 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 1377046 ']' 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.159 [2024-11-15 10:53:48.792479] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:12:00.159 [2024-11-15 10:53:48.792528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.159 [2024-11-15 10:53:48.854025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.159 [2024-11-15 10:53:48.894508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.159 [2024-11-15 10:53:48.894547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:00.159 [2024-11-15 10:53:48.894554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.159 [2024-11-15 10:53:48.894559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.159 [2024-11-15 10:53:48.894565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.159 [2024-11-15 10:53:48.896034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.159 [2024-11-15 10:53:48.896132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.159 [2024-11-15 10:53:48.896196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.159 [2024-11-15 10:53:48.896198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.159 10:53:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.159 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.159 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:00.159 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.159 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.418 [2024-11-15 10:53:49.062046] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x98d230/0x991720) succeed. 00:12:00.418 [2024-11-15 10:53:49.071326] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x98e8c0/0x9d2dc0) succeed. 
00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.418 [2024-11-15 10:53:49.210773] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.418 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq 
length 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.677 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.935 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:00.935 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:00.935 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:00.935 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:00.936 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.194 10:53:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.194 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 
--hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.453 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
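Each referral change made over RPC is then cross-checked from the host side with nvme-cli, which is the discover/jq/sort pipeline this trace keeps repeating (and continues just below). A condensed sketch of that check, with the host NQN/ID and target address copied verbatim from this run:

    # Sketch; host identity and discovery address taken from the trace above.
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
         --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 \
         -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort    # empty output expected once all referrals have been removed
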
00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:01.712 rmmod nvme_rdma 00:12:01.712 rmmod nvme_fabrics 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1377046 ']' 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1377046 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 1377046 ']' 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 1377046 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1377046 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1377046' 00:12:01.712 killing process with pid 1377046 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 1377046 00:12:01.712 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 1377046 00:12:01.971 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.971 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:01.971 00:12:01.971 real 0m7.227s 00:12:01.971 user 0m9.529s 00:12:01.971 sys 0m4.496s 00:12:01.971 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.971 10:53:50 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.971 ************************************ 00:12:01.971 END TEST nvmf_referrals 00:12:01.971 ************************************ 00:12:01.971 10:53:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:01.971 10:53:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:01.971 10:53:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.971 10:53:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.231 ************************************ 00:12:02.231 START TEST nvmf_connect_disconnect 00:12:02.231 ************************************ 00:12:02.231 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:02.231 * Looking for test storage... 00:12:02.231 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:02.231 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:02.231 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:02.231 10:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:02.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.231 --rc genhtml_branch_coverage=1 00:12:02.231 --rc genhtml_function_coverage=1 00:12:02.231 --rc genhtml_legend=1 00:12:02.231 --rc geninfo_all_blocks=1 00:12:02.231 --rc geninfo_unexecuted_blocks=1 00:12:02.231 00:12:02.231 ' 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:02.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.231 --rc genhtml_branch_coverage=1 00:12:02.231 --rc genhtml_function_coverage=1 00:12:02.231 --rc genhtml_legend=1 00:12:02.231 --rc geninfo_all_blocks=1 00:12:02.231 --rc geninfo_unexecuted_blocks=1 00:12:02.231 00:12:02.231 ' 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:02.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.231 --rc genhtml_branch_coverage=1 00:12:02.231 --rc genhtml_function_coverage=1 00:12:02.231 --rc genhtml_legend=1 00:12:02.231 --rc geninfo_all_blocks=1 00:12:02.231 --rc geninfo_unexecuted_blocks=1 00:12:02.231 00:12:02.231 ' 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:02.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.231 --rc genhtml_branch_coverage=1 00:12:02.231 --rc genhtml_function_coverage=1 00:12:02.231 --rc genhtml_legend=1 00:12:02.231 --rc geninfo_all_blocks=1 00:12:02.231 --rc geninfo_unexecuted_blocks=1 00:12:02.231 00:12:02.231 ' 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.231 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.232 10:53:51 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.232 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.232 10:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.502 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 
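Device selection here is a table lookup: each supported NIC family (Intel e810 and x722, Mellanox mlx) is an array of PCI vendor:device IDs, and because SPDK_TEST_NVMF_NICS=mlx5 only the Mellanox bucket survives, yielding the "Found 0000:af:00.0 (0x15b3 - 0x1017)" lines that follow. A rough standalone equivalent, assuming plain lspci in place of SPDK's pci_bus_cache bookkeeping (variable names are illustrative):

    # Sketch: collect Mellanox NICs by PCI vendor ID. lspci -Dn prints lines
    # like "0000:af:00.0 0207: 15b3:1017", i.e. addr class: vendor:device.
    mellanox=15b3
    declare -a mlx=()
    while read -r addr _ id _; do
        vendor=${id%%:*} device=${id##*:}
        if [[ $vendor == "$mellanox" ]]; then
            mlx+=("$addr")
            echo "Found $addr (0x$vendor - 0x$device)"
        fi
    done < <(lspci -Dn)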
00:12:07.503 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:07.503 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:07.503 Found net devices under 0000:af:00.0: mlx_0_0 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:07.503 Found net devices under 0000:af:00.1: mlx_0_1 00:12:07.503 10:53:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:07.503 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 
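The RDMA stack load just traced and the per-interface address probe in the entries that follow are two small patterns worth isolating. A condensed sketch (the first_ipv4 helper name and the head -n 1 guard are illustrative; the module list and the awk/cut pipeline mirror the traces in this log):

    # Sketch: load the kernel IB/RDMA stack, then read an interface's IPv4.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
    first_ipv4() {   # e.g. first_ipv4 mlx_0_0 -> 192.168.100.8
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n 1
    }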
00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:07.762 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:07.762 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:07.762 altname enp175s0f0np0 00:12:07.762 altname ens801f0np0 00:12:07.762 inet 192.168.100.8/24 scope global mlx_0_0 00:12:07.762 valid_lft forever preferred_lft forever 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:07.762 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:07.762 9: 
mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:07.763 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:07.763 altname enp175s0f1np1 00:12:07.763 altname ens801f1np1 00:12:07.763 inet 192.168.100.9/24 scope global mlx_0_1 00:12:07.763 valid_lft forever preferred_lft forever 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:07.763 10:53:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:07.763 192.168.100.9' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:07.763 192.168.100.9' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:07.763 192.168.100.9' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1380700 00:12:07.763 10:53:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1380700 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 1380700 ']' 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:07.763 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.763 [2024-11-15 10:53:56.568577] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:12:07.763 [2024-11-15 10:53:56.568631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.763 [2024-11-15 10:53:56.632138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.049 [2024-11-15 10:53:56.676579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.049 [2024-11-15 10:53:56.676612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.049 [2024-11-15 10:53:56.676619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.049 [2024-11-15 10:53:56.676625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.049 [2024-11-15 10:53:56.676629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
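nvmfappstart above is launch-and-poll: start nvmf_tgt in the background, then retry a harmless RPC against /var/tmp/spdk.sock until the reactors answer. A minimal sketch under that reading of waitforlisten (the retry budget is an assumption; rpc_get_methods is a stock SPDK RPC, and the nvmf_tgt flags are the ones logged above):

    # Sketch of the launch-and-wait pattern behind waitforlisten.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do   # assumed retry budget
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done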
00:12:08.049 [2024-11-15 10:53:56.678302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.049 [2024-11-15 10:53:56.678332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.049 [2024-11-15 10:53:56.678424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.049 [2024-11-15 10:53:56.678427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.049 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.049 [2024-11-15 10:53:56.829143] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:08.049 [2024-11-15 10:53:56.849567] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d9e230/0x1da2720) succeed. 00:12:08.049 [2024-11-15 10:53:56.859020] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d9f8c0/0x1de3dc0) succeed. 
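The transport RPC just traced, plus the four calls that follow, make up the whole target bring-up: back the target with a malloc bdev, expose it through a subsystem, attach the namespace, and open an RDMA listener. Condensed from this run (rpc_cmd is the suite's wrapper around scripts/rpc.py; all values mirror the traces below):

    # Sketch: the connect_disconnect.sh target setup, condensed.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    bdev=$(rpc_cmd bdev_malloc_create 64 512)   # 64 MiB, 512 B blocks -> "Malloc0"
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420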
00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.371 10:53:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 [2024-11-15 10:53:57.014504] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:08.371 10:53:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:16.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:45.818 10:54:34 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:45.818 rmmod nvme_rdma 00:12:45.818 rmmod nvme_fabrics 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1380700 ']' 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1380700 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1380700 ']' 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 1380700 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1380700 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:45.818 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1380700' 00:12:45.818 killing process with pid 1380700 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 1380700 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 1380700 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:45.819 00:12:45.819 real 0m43.683s 00:12:45.819 user 2m32.508s 00:12:45.819 sys 0m5.473s 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.819 
************************************ 00:12:45.819 END TEST nvmf_connect_disconnect 00:12:45.819 ************************************ 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.819 ************************************ 00:12:45.819 START TEST nvmf_multitarget 00:12:45.819 ************************************ 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:45.819 * Looking for test storage... 00:12:45.819 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:45.819 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:46.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.077 --rc genhtml_branch_coverage=1 00:12:46.077 --rc genhtml_function_coverage=1 00:12:46.077 --rc genhtml_legend=1 00:12:46.077 --rc geninfo_all_blocks=1 00:12:46.077 --rc geninfo_unexecuted_blocks=1 00:12:46.077 00:12:46.077 ' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:46.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.077 --rc genhtml_branch_coverage=1 00:12:46.077 --rc genhtml_function_coverage=1 00:12:46.077 --rc genhtml_legend=1 00:12:46.077 --rc geninfo_all_blocks=1 00:12:46.077 --rc geninfo_unexecuted_blocks=1 00:12:46.077 00:12:46.077 ' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:46.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.077 --rc genhtml_branch_coverage=1 00:12:46.077 --rc genhtml_function_coverage=1 00:12:46.077 --rc genhtml_legend=1 00:12:46.077 --rc geninfo_all_blocks=1 00:12:46.077 --rc geninfo_unexecuted_blocks=1 00:12:46.077 00:12:46.077 ' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:46.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.077 --rc genhtml_branch_coverage=1 00:12:46.077 --rc genhtml_function_coverage=1 00:12:46.077 --rc genhtml_legend=1 00:12:46.077 --rc geninfo_all_blocks=1 00:12:46.077 --rc geninfo_unexecuted_blocks=1 00:12:46.077 00:12:46.077 ' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.077 10:54:34 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.077 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.077 10:54:34 
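
The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': with the controlling variable unset, -eq receives an empty string and the test is a runtime error rather than simply false (the harness shrugs it off and continues). The usual guard is a numeric default on the expansion; a sketch, with the variable name invented for illustration:

    # '[ "$FLAG" -eq 1 ]' errors out when FLAG is empty or unset;
    # defaulting the expansion keeps the test well-formed either way.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "option enabled"
    else
        echo "option disabled"   # taken when FLAG is empty: '' defaults to 0
    fi

The steadily growing PATH in the paths/export.sh lines above is a related no-harm artifact: every test init sources export.sh again, and each pass prepends the same go/golangci/protoc directories without deduplicating.
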
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.077 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.078 10:54:34 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:12:51.556 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:51.556 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:51.557 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1017 == 
\0\x\1\0\1\7 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:51.557 Found net devices under 0000:af:00.0: mlx_0_0 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:51.557 Found net devices under 0000:af:00.1: mlx_0_1 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@68 -- # modprobe ib_umad 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.557 10:54:40 
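
rdma_device_init above loads the full kernel RDMA stack (connection managers, core, userspace verbs access) before any interface work, after which allocate_nic_ips walks the RDMA-capable netdevs. The module-loading half with explicit failure reporting, module list copied from the trace:

    # Load the IB/RDMA modules the harness depends on, in order.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "modprobe $mod failed" >&2; exit 1; }
    done
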
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:51.557 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.557 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:51.557 altname enp175s0f0np0 00:12:51.557 altname ens801f0np0 00:12:51.557 inet 192.168.100.8/24 scope global mlx_0_0 00:12:51.557 valid_lft forever preferred_lft forever 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:51.557 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.557 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:51.557 altname enp175s0f1np1 00:12:51.557 altname ens801f1np1 00:12:51.557 inet 192.168.100.9/24 scope global mlx_0_1 00:12:51.557 valid_lft forever preferred_lft forever 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
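
The interleaved awk/cut fragments above are a single helper, get_ip_address: field 4 of "ip -o -4 addr show <if>" is the CIDR address (192.168.100.8/24 for mlx_0_0 here) and "cut -d/ -f1" strips the prefix length. Self-contained:

    get_ipv4() {
        # 'ip -o' emits one record per line:
        #   "8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0 ..."
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n 1
    }

    get_ipv4 mlx_0_0   # prints 192.168.100.8 on this rig
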
00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:51.557 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:51.558 192.168.100.9' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:51.558 192.168.100.9' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:51.558 192.168.100.9' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:12:51.558 10:54:40 
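
get_available_rdma_ips repeats the same interface walk to collect one IPv4 address per RDMA port into a newline-separated list, and the head/tail pipelines traced here peel off the first and second entries for NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. The same slicing in isolation:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    first=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    second=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9
    echo "target1=$first target2=$second"
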
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1390776 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1390776 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 1390776 ']' 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:51.558 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.558 [2024-11-15 10:54:40.424023] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:12:51.558 [2024-11-15 10:54:40.424074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.816 [2024-11-15 10:54:40.487489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.816 [2024-11-15 10:54:40.529132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.816 [2024-11-15 10:54:40.529178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
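
With addressing settled, nvmfappstart launches the target (nvmf_tgt, pid 1390776, reactor mask 0xF to match the "Total cores available: 4" notice) and waitforlisten blocks until the app answers on its RPC socket. A minimal sketch of that start-and-wait, assuming rpc_get_methods as the liveness probe; the real helper's loop is equivalent but not copied verbatim:

    app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    "$app" -i 0 -e 0xFFFF -m 0xF &
    pid=$!

    # Poll until the target accepts RPCs on /var/tmp/spdk.sock, or bail out.
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$pid" 2> /dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done
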
00:12:51.816 [2024-11-15 10:54:40.529186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.816 [2024-11-15 10:54:40.529192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.816 [2024-11-15 10:54:40.529198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.816 [2024-11-15 10:54:40.530719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.816 [2024-11-15 10:54:40.530819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.816 [2024-11-15 10:54:40.530885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.816 [2024-11-15 10:54:40.530886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:51.816 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:52.073 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:52.073 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:52.073 "nvmf_tgt_1" 00:12:52.074 10:54:40 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:52.332 "nvmf_tgt_2" 00:12:52.332 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.332 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:52.332 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:52.332 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:52.332 true 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n 
nvmf_tgt_2 00:12:52.590 true 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.590 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:52.590 rmmod nvme_rdma 00:12:52.849 rmmod nvme_fabrics 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1390776 ']' 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1390776 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 1390776 ']' 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 1390776 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1390776 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1390776' 00:12:52.849 killing process with pid 1390776 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 1390776 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 1390776 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:52.849 10:54:41 
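
The test body above is a count-based round trip against the per-test RPC wrapper: query the target list, create nvmf_tgt_1 and nvmf_tgt_2 (each with -s 32, as traced), check the count grew from 1 to 3, delete both, and check it fell back to 1. Reconstructed as a plain script, paths as in the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    count() { "$rpc" nvmf_get_targets | jq length; }

    [ "$(count)" -eq 1 ] || exit 1      # only the default target exists
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(count)" -eq 3 ] || exit 1      # default plus the two just created
    "$rpc" nvmf_delete_target -n nvmf_tgt_1
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    [ "$(count)" -eq 1 ] || exit 1      # back to just the default

nvmftestfini's teardown is also visible above: unload nvme-rdma and nvme-fabrics, then killprocess double-checks the pid's comm (reactor_0, not sudo) before killing 1390776 and waiting on it.
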
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:52.849 00:12:52.849 real 0m7.102s 00:12:52.849 user 0m7.250s 00:12:52.849 sys 0m4.515s 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.849 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.849 ************************************ 00:12:52.849 END TEST nvmf_multitarget 00:12:52.849 ************************************ 00:12:53.107 10:54:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:53.107 10:54:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:53.107 10:54:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:53.107 10:54:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.107 ************************************ 00:12:53.108 START TEST nvmf_rpc 00:12:53.108 ************************************ 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:53.108 * Looking for test storage... 00:12:53.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.108 --rc genhtml_branch_coverage=1 00:12:53.108 --rc genhtml_function_coverage=1 00:12:53.108 --rc genhtml_legend=1 00:12:53.108 --rc geninfo_all_blocks=1 00:12:53.108 --rc geninfo_unexecuted_blocks=1 00:12:53.108 00:12:53.108 ' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:53.108 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:53.109 10:54:41 
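
Sourcing nvmf/common.sh again regenerates the host identity: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID tail doubles as NVME_HOSTID. It is identical across both test inits here (80bdebd3-...), which suggests nvme-cli derived it from the machine's DMI product UUID rather than fresh random bytes, as it prefers to when one is exposed. The derivation in isolation:

    hostnqn=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*uuid:}       # keep just the UUID tail
    NVME_HOST=(--hostnqn="$hostnqn" --hostid="$hostid")
    echo "${NVME_HOST[@]}"
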
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:53.109 10:54:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:58.370 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.371 10:54:46 
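
The eval '_remove_spdk_ns 15> /dev/null' step above is how xtrace_disable_per_cmd mutes one noisy helper: the harness parks xtrace output on file descriptor 15, so pointing fd 15 at /dev/null for a single invocation discards just that command's trace while leaving tracing on globally. The idiom in isolation (helper name ours):

    set -x
    exec 15>&2           # dedicate fd 15 to xtrace output
    BASH_XTRACEFD=15

    noisy_cleanup() { ip netns list > /dev/null; }   # stand-in for _remove_spdk_ns

    noisy_cleanup 15> /dev/null   # traces emitted inside this call are discarded
    echo visible                  # traced normally again
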
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:12:58.371 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:58.371 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- 
# [[ rdma == tcp ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:58.371 Found net devices under 0000:af:00.0: mlx_0_0 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:58.371 Found net devices under 0000:af:00.1: mlx_0_1 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:58.371 
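The modprobe sequence just traced is the whole of the RDMA stack setup: load_ib_rdma_modules pulls in the InfiniBand core, user-space verbs, and connection-manager modules before the mlx5 netdevs can be used as fabrics ports. A minimal sketch of that step, assuming only the module names visible in the trace (the function name and warning handling below are illustrative, not SPDK's exact code):

    # Load the kernel modules the trace shows (ib_* core stack, then the
    # iWARP/RDMA connection managers). The real helper is a no-op off Linux.
    load_ib_rdma_modules_sketch() {
        [ "$(uname)" != Linux ] && return 0
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod" || echo "warning: failed to load $mod" >&2
        done
    }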
10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:58.371 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:58.372 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:58.372 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:58.372 altname enp175s0f0np0 00:12:58.372 altname ens801f0np0 00:12:58.372 inet 192.168.100.8/24 scope global mlx_0_0 00:12:58.372 valid_lft forever preferred_lft forever 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:58.372 10:54:46 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:58.372 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:58.372 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:58.372 altname enp175s0f1np1 00:12:58.372 altname ens801f1np1 00:12:58.372 inet 192.168.100.9/24 scope global mlx_0_1 00:12:58.372 valid_lft forever preferred_lft forever 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:58.372 
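The get_ip_address calls traced around this point reduce `ip -o -4 addr show <if>` to a bare IPv4 address; that is how the harness learns 192.168.100.8 and 192.168.100.9 for mlx_0_0 and mlx_0_1. The pipeline is exactly what the trace shows; only the wrapper name below is illustrative:

    # First IPv4 address of an interface. With `ip -o` each address is one
    # line and $4 is "ADDR/PREFIX"; cut strips the prefix length.
    get_ip_address_sketch() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # get_ip_address_sketch mlx_0_0   -> 192.168.100.8 in this run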
10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:58.372 192.168.100.9' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:58.372 192.168.100.9' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:58.372 192.168.100.9' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1393919 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1393919 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 1393919 ']' 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.372 [2024-11-15 10:54:46.739320] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:12:58.372 [2024-11-15 10:54:46.739367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.372 [2024-11-15 10:54:46.799864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.372 [2024-11-15 10:54:46.842194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.372 [2024-11-15 10:54:46.842229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.372 [2024-11-15 10:54:46.842236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.372 [2024-11-15 10:54:46.842242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.372 [2024-11-15 10:54:46.842248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
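At this point nvmf_tgt has been launched (-m 0xF pins four reactors) and the harness blocks in waitforlisten until the app's RPC socket is usable. A simplified sketch of that wait, assuming only what the log states (the real helper also verifies that the RPC server answers; this version just polls for the pid and the socket):

    # Poll until the target either dies or creates /var/tmp/spdk.sock.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=0
        while (( i++ < 100 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            [ -S "$sock" ] && return 0               # socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }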
00:12:58.372 [2024-11-15 10:54:46.843851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.372 [2024-11-15 10:54:46.843949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.372 [2024-11-15 10:54:46.843965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.372 [2024-11-15 10:54:46.843970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.372 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.373 10:54:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:58.373 "tick_rate": 2300000000, 00:12:58.373 "poll_groups": [ 00:12:58.373 { 00:12:58.373 "name": "nvmf_tgt_poll_group_000", 00:12:58.373 "admin_qpairs": 0, 00:12:58.373 "io_qpairs": 0, 00:12:58.373 "current_admin_qpairs": 0, 00:12:58.373 "current_io_qpairs": 0, 00:12:58.373 "pending_bdev_io": 0, 00:12:58.373 "completed_nvme_io": 0, 00:12:58.373 "transports": [] 00:12:58.373 }, 00:12:58.373 { 00:12:58.373 "name": "nvmf_tgt_poll_group_001", 00:12:58.373 "admin_qpairs": 0, 00:12:58.373 "io_qpairs": 0, 00:12:58.373 "current_admin_qpairs": 0, 00:12:58.373 "current_io_qpairs": 0, 00:12:58.373 "pending_bdev_io": 0, 00:12:58.373 "completed_nvme_io": 0, 00:12:58.373 "transports": [] 00:12:58.373 }, 00:12:58.373 { 00:12:58.373 "name": "nvmf_tgt_poll_group_002", 00:12:58.373 "admin_qpairs": 0, 00:12:58.373 "io_qpairs": 0, 00:12:58.373 "current_admin_qpairs": 0, 00:12:58.373 "current_io_qpairs": 0, 00:12:58.373 "pending_bdev_io": 0, 00:12:58.373 "completed_nvme_io": 0, 00:12:58.373 "transports": [] 00:12:58.373 }, 00:12:58.373 { 00:12:58.373 "name": "nvmf_tgt_poll_group_003", 00:12:58.373 "admin_qpairs": 0, 00:12:58.373 "io_qpairs": 0, 00:12:58.373 "current_admin_qpairs": 0, 00:12:58.373 "current_io_qpairs": 0, 00:12:58.373 "pending_bdev_io": 0, 00:12:58.373 "completed_nvme_io": 0, 00:12:58.373 "transports": [] 00:12:58.373 } 00:12:58.373 ] 00:12:58.373 }' 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.373 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.373 [2024-11-15 10:54:47.117977] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1005290/0x1009780) succeed. 00:12:58.373 [2024-11-15 10:54:47.127343] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1006920/0x104ae20) succeed. 00:12:58.631 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.631 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:58.631 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.631 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.631 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.631 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:58.631 "tick_rate": 2300000000, 00:12:58.631 "poll_groups": [ 00:12:58.631 { 00:12:58.631 "name": "nvmf_tgt_poll_group_000", 00:12:58.631 "admin_qpairs": 0, 00:12:58.631 "io_qpairs": 0, 00:12:58.631 "current_admin_qpairs": 0, 00:12:58.631 "current_io_qpairs": 0, 00:12:58.631 "pending_bdev_io": 0, 00:12:58.631 "completed_nvme_io": 0, 00:12:58.631 "transports": [ 00:12:58.631 { 00:12:58.631 "trtype": "RDMA", 00:12:58.631 "pending_data_buffer": 0, 00:12:58.631 "devices": [ 00:12:58.631 { 00:12:58.631 "name": "mlx5_0", 00:12:58.631 "polls": 14874, 00:12:58.631 "idle_polls": 14874, 00:12:58.631 "completions": 0, 00:12:58.631 "requests": 0, 00:12:58.631 "request_latency": 0, 00:12:58.631 "pending_free_request": 0, 00:12:58.631 "pending_rdma_read": 0, 00:12:58.631 "pending_rdma_write": 0, 00:12:58.631 "pending_rdma_send": 0, 00:12:58.631 "total_send_wrs": 0, 00:12:58.631 "send_doorbell_updates": 0, 00:12:58.631 "total_recv_wrs": 4096, 00:12:58.631 "recv_doorbell_updates": 1 00:12:58.631 }, 00:12:58.631 { 00:12:58.631 "name": "mlx5_1", 00:12:58.631 "polls": 14874, 00:12:58.631 "idle_polls": 14874, 00:12:58.631 "completions": 0, 00:12:58.631 "requests": 0, 00:12:58.631 "request_latency": 0, 00:12:58.631 "pending_free_request": 0, 00:12:58.631 "pending_rdma_read": 0, 00:12:58.631 "pending_rdma_write": 0, 00:12:58.631 "pending_rdma_send": 0, 00:12:58.631 "total_send_wrs": 0, 00:12:58.631 "send_doorbell_updates": 0, 00:12:58.631 "total_recv_wrs": 4096, 00:12:58.631 "recv_doorbell_updates": 1 00:12:58.631 } 00:12:58.631 ] 00:12:58.631 } 00:12:58.631 ] 00:12:58.631 }, 00:12:58.631 { 00:12:58.631 "name": "nvmf_tgt_poll_group_001", 00:12:58.631 "admin_qpairs": 0, 00:12:58.631 "io_qpairs": 0, 00:12:58.631 "current_admin_qpairs": 0, 00:12:58.631 "current_io_qpairs": 0, 00:12:58.631 "pending_bdev_io": 0, 00:12:58.631 "completed_nvme_io": 0, 00:12:58.631 "transports": [ 00:12:58.631 { 00:12:58.631 "trtype": "RDMA", 00:12:58.631 "pending_data_buffer": 0, 00:12:58.631 "devices": [ 00:12:58.631 { 00:12:58.631 "name": "mlx5_0", 
00:12:58.631 "polls": 9864, 00:12:58.631 "idle_polls": 9864, 00:12:58.631 "completions": 0, 00:12:58.631 "requests": 0, 00:12:58.631 "request_latency": 0, 00:12:58.631 "pending_free_request": 0, 00:12:58.631 "pending_rdma_read": 0, 00:12:58.631 "pending_rdma_write": 0, 00:12:58.631 "pending_rdma_send": 0, 00:12:58.631 "total_send_wrs": 0, 00:12:58.631 "send_doorbell_updates": 0, 00:12:58.631 "total_recv_wrs": 4096, 00:12:58.631 "recv_doorbell_updates": 1 00:12:58.631 }, 00:12:58.631 { 00:12:58.631 "name": "mlx5_1", 00:12:58.631 "polls": 9864, 00:12:58.631 "idle_polls": 9864, 00:12:58.631 "completions": 0, 00:12:58.631 "requests": 0, 00:12:58.631 "request_latency": 0, 00:12:58.631 "pending_free_request": 0, 00:12:58.631 "pending_rdma_read": 0, 00:12:58.631 "pending_rdma_write": 0, 00:12:58.631 "pending_rdma_send": 0, 00:12:58.631 "total_send_wrs": 0, 00:12:58.631 "send_doorbell_updates": 0, 00:12:58.631 "total_recv_wrs": 4096, 00:12:58.631 "recv_doorbell_updates": 1 00:12:58.631 } 00:12:58.631 ] 00:12:58.631 } 00:12:58.631 ] 00:12:58.631 }, 00:12:58.631 { 00:12:58.631 "name": "nvmf_tgt_poll_group_002", 00:12:58.631 "admin_qpairs": 0, 00:12:58.631 "io_qpairs": 0, 00:12:58.631 "current_admin_qpairs": 0, 00:12:58.631 "current_io_qpairs": 0, 00:12:58.631 "pending_bdev_io": 0, 00:12:58.631 "completed_nvme_io": 0, 00:12:58.631 "transports": [ 00:12:58.631 { 00:12:58.631 "trtype": "RDMA", 00:12:58.631 "pending_data_buffer": 0, 00:12:58.631 "devices": [ 00:12:58.631 { 00:12:58.631 "name": "mlx5_0", 00:12:58.631 "polls": 5238, 00:12:58.631 "idle_polls": 5238, 00:12:58.631 "completions": 0, 00:12:58.631 "requests": 0, 00:12:58.631 "request_latency": 0, 00:12:58.631 "pending_free_request": 0, 00:12:58.631 "pending_rdma_read": 0, 00:12:58.631 "pending_rdma_write": 0, 00:12:58.631 "pending_rdma_send": 0, 00:12:58.631 "total_send_wrs": 0, 00:12:58.631 "send_doorbell_updates": 0, 00:12:58.631 "total_recv_wrs": 4096, 00:12:58.631 "recv_doorbell_updates": 1 00:12:58.631 }, 00:12:58.631 { 00:12:58.631 "name": "mlx5_1", 00:12:58.631 "polls": 5238, 00:12:58.631 "idle_polls": 5238, 00:12:58.631 "completions": 0, 00:12:58.631 "requests": 0, 00:12:58.631 "request_latency": 0, 00:12:58.631 "pending_free_request": 0, 00:12:58.631 "pending_rdma_read": 0, 00:12:58.632 "pending_rdma_write": 0, 00:12:58.632 "pending_rdma_send": 0, 00:12:58.632 "total_send_wrs": 0, 00:12:58.632 "send_doorbell_updates": 0, 00:12:58.632 "total_recv_wrs": 4096, 00:12:58.632 "recv_doorbell_updates": 1 00:12:58.632 } 00:12:58.632 ] 00:12:58.632 } 00:12:58.632 ] 00:12:58.632 }, 00:12:58.632 { 00:12:58.632 "name": "nvmf_tgt_poll_group_003", 00:12:58.632 "admin_qpairs": 0, 00:12:58.632 "io_qpairs": 0, 00:12:58.632 "current_admin_qpairs": 0, 00:12:58.632 "current_io_qpairs": 0, 00:12:58.632 "pending_bdev_io": 0, 00:12:58.632 "completed_nvme_io": 0, 00:12:58.632 "transports": [ 00:12:58.632 { 00:12:58.632 "trtype": "RDMA", 00:12:58.632 "pending_data_buffer": 0, 00:12:58.632 "devices": [ 00:12:58.632 { 00:12:58.632 "name": "mlx5_0", 00:12:58.632 "polls": 875, 00:12:58.632 "idle_polls": 875, 00:12:58.632 "completions": 0, 00:12:58.632 "requests": 0, 00:12:58.632 "request_latency": 0, 00:12:58.632 "pending_free_request": 0, 00:12:58.632 "pending_rdma_read": 0, 00:12:58.632 "pending_rdma_write": 0, 00:12:58.632 "pending_rdma_send": 0, 00:12:58.632 "total_send_wrs": 0, 00:12:58.632 "send_doorbell_updates": 0, 00:12:58.632 "total_recv_wrs": 4096, 00:12:58.632 "recv_doorbell_updates": 1 00:12:58.632 }, 00:12:58.632 { 00:12:58.632 "name": "mlx5_1", 
00:12:58.632 "polls": 875, 00:12:58.632 "idle_polls": 875, 00:12:58.632 "completions": 0, 00:12:58.632 "requests": 0, 00:12:58.632 "request_latency": 0, 00:12:58.632 "pending_free_request": 0, 00:12:58.632 "pending_rdma_read": 0, 00:12:58.632 "pending_rdma_write": 0, 00:12:58.632 "pending_rdma_send": 0, 00:12:58.632 "total_send_wrs": 0, 00:12:58.632 "send_doorbell_updates": 0, 00:12:58.632 "total_recv_wrs": 4096, 00:12:58.632 "recv_doorbell_updates": 1 00:12:58.632 } 00:12:58.632 ] 00:12:58.632 } 00:12:58.632 ] 00:12:58.632 } 00:12:58.632 ] 00:12:58.632 }' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:58.632 10:54:47 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.632 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.889 Malloc1 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.889 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.890 [2024-11-15 10:54:47.562208] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:58.890 10:54:47 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:58.890 [2024-11-15 10:54:47.608754] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562' 00:12:58.890 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.890 could not add new controller: failed to write to nvme-fabrics device 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.890 10:54:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:02.173 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.173 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:02.173 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.173 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:02.173 10:54:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:04.073 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:04.073 10:54:52 
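The failed connect followed by nvmf_subsystem_add_host and a successful connect is the access-control check at the heart of this test: with allow_any_host disabled, the target rejects the initiator ("does not allow host") until its NQN is whitelisted. The same provisioning sequence as direct scripts/rpc.py calls (rpc_cmd in the trace talks to the same RPC server; $HOSTNQN stands in for the initiator NQN used above):

    rpc=scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the host list
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # nvme connect fails with "does not allow host" until:
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"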
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:04.073 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.073 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:04.073 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.073 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:04.073 10:54:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:06.599 [2024-11-15 10:54:55.140740] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562' 00:13:06.599 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:06.599 could not add new controller: failed to write to nvme-fabrics device 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.599 10:54:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:09.881 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.881 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:09.881 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.881 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:09.881 10:54:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:11.779 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:11.780 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:11.780 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.780 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:11.780 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.780 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:11.780 10:55:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.310 [2024-11-15 10:55:02.669828] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.310 10:55:02 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.310 10:55:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:17.589 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.589 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:17.589 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.589 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:17.589 10:55:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:19.490 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:19.490 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:19.491 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.491 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:19.491 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.491 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:19.491 10:55:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.391 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.649 [2024-11-15 10:55:10.284590] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.649 10:55:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:24.929 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.930 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:24.930 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.930 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:24.930 10:55:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:26.827 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:26.827 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:26.827 
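The lsblk/grep polling traced here is the waitforserial pattern: after nvme connect, loop until a block device whose SERIAL column matches the subsystem serial (SPDKISFASTANDAWESOME) appears; waitforserial_disconnect runs the inverse check after nvme disconnect. A simplified sketch of the traced logic (the function name and the want parameter are illustrative):

    waitforserial_sketch() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2    # give the fabrics controller time to surface namespaces
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= want )) && return 0
        done
        return 1       # serial never showed up
    }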
10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.827 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:26.827 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.827 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:26.827 10:55:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.355 [2024-11-15 10:55:17.882335] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.355 10:55:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:32.633 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.633 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:32.633 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.633 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:32.633 10:55:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:34.533 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:34.533 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:34.533 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.533 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:34.533 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.533 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:34.533 10:55:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.062 10:55:25 
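waitforserial, polled above via lsblk, gives the fabric roughly 30 seconds for the namespace to surface as a block device. A simplified reading of the common/autotest_common.sh helper (not the verbatim source; the real one also accepts an expected device count):

waitforserial() {
    local serial=$1 i=0 want=${2:-1}
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column carries our subsystem serial
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
    done
    return 1
}

waitforserial_disconnect inverts the check: it returns once grep -q -w no longer finds the serial in the lsblk listing.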
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.062 [2024-11-15 10:55:25.437457] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.062 10:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 
--hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:40.342 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.342 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:40.342 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.342 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:40.342 10:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:41.716 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:41.716 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:41.716 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.973 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:41.973 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.973 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:41.973 10:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.501 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.501 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:44.501 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.502 10:55:32 
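The back half of each pass, visible in the rpc.sh@90-94 lines above, is the mirror-image teardown, again with the flags as logged:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
waitforserial_disconnect SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5   # drop nsid 5 before the subsystem
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1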
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.502 10:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.502 [2024-11-15 10:55:33.009018] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.502 10:55:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:47.783 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.783 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:47.783 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.783 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:47.783 10:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:49.685 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:49.685 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:49.685 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.685 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:49.685 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == 
nvme_device_counter )) 00:13:49.685 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:49.685 10:55:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.217 [2024-11-15 10:55:40.583460] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.217 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 [2024-11-15 10:55:40.632103] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 
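From rpc.sh@99 onward the trace switches to a second five-pass loop: the same subsystem lifecycle, but with no host attached and the namespace ID left for the target to assign. Sketched with the same assumed $rpc_py shorthand:

$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: target picks the first free nsid
$rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # it landed at nsid 1
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1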
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 [2024-11-15 10:55:40.680259] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 [2024-11-15 10:55:40.728451] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 [2024-11-15 10:55:40.776647] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.218 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.219 10:55:40 
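With both loops done, rpc.sh@110 snapshots nvmf_get_stats; the JSON dumped below carries one entry per reactor poll group, each with qpair counters plus per-RDMA-device (mlx5_0 / mlx5_1) work-request and latency counters. Pulling the same view by hand might look like this (assuming the target is still up and $rpc_py points at scripts/rpc.py):

$rpc_py nvmf_get_stats | jq '.poll_groups[] | {name, admin_qpairs, io_qpairs, completed_nvme_io}'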
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:52.219 "tick_rate": 2300000000, 00:13:52.219 "poll_groups": [ 00:13:52.219 { 00:13:52.219 "name": "nvmf_tgt_poll_group_000", 00:13:52.219 "admin_qpairs": 2, 00:13:52.219 "io_qpairs": 27, 00:13:52.219 "current_admin_qpairs": 0, 00:13:52.219 "current_io_qpairs": 0, 00:13:52.219 "pending_bdev_io": 0, 00:13:52.219 "completed_nvme_io": 273, 00:13:52.219 "transports": [ 00:13:52.219 { 00:13:52.219 "trtype": "RDMA", 00:13:52.219 "pending_data_buffer": 0, 00:13:52.219 "devices": [ 00:13:52.219 { 00:13:52.219 "name": "mlx5_0", 00:13:52.219 "polls": 6395545, 00:13:52.219 "idle_polls": 6394977, 00:13:52.219 "completions": 669, 00:13:52.219 "requests": 334, 00:13:52.219 "request_latency": 77405070, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 "pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 611, 00:13:52.219 "send_doorbell_updates": 272, 00:13:52.219 "total_recv_wrs": 4430, 00:13:52.219 "recv_doorbell_updates": 272 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "name": "mlx5_1", 00:13:52.219 "polls": 6395545, 00:13:52.219 "idle_polls": 6395545, 00:13:52.219 "completions": 0, 00:13:52.219 "requests": 0, 00:13:52.219 "request_latency": 0, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 "pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 0, 00:13:52.219 "send_doorbell_updates": 0, 00:13:52.219 "total_recv_wrs": 4096, 00:13:52.219 "recv_doorbell_updates": 1 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "name": "nvmf_tgt_poll_group_001", 00:13:52.219 "admin_qpairs": 2, 00:13:52.219 "io_qpairs": 26, 00:13:52.219 "current_admin_qpairs": 0, 00:13:52.219 "current_io_qpairs": 0, 00:13:52.219 "pending_bdev_io": 0, 00:13:52.219 "completed_nvme_io": 77, 00:13:52.219 "transports": [ 00:13:52.219 { 00:13:52.219 "trtype": "RDMA", 00:13:52.219 "pending_data_buffer": 0, 00:13:52.219 "devices": [ 00:13:52.219 { 00:13:52.219 "name": "mlx5_0", 00:13:52.219 "polls": 6516287, 00:13:52.219 "idle_polls": 6516033, 00:13:52.219 "completions": 274, 00:13:52.219 "requests": 137, 00:13:52.219 "request_latency": 22113592, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 "pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 218, 00:13:52.219 "send_doorbell_updates": 125, 00:13:52.219 "total_recv_wrs": 4233, 00:13:52.219 "recv_doorbell_updates": 126 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "name": "mlx5_1", 00:13:52.219 "polls": 6516287, 00:13:52.219 "idle_polls": 6516287, 00:13:52.219 "completions": 0, 00:13:52.219 "requests": 0, 00:13:52.219 "request_latency": 0, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 
"pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 0, 00:13:52.219 "send_doorbell_updates": 0, 00:13:52.219 "total_recv_wrs": 4096, 00:13:52.219 "recv_doorbell_updates": 1 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "name": "nvmf_tgt_poll_group_002", 00:13:52.219 "admin_qpairs": 1, 00:13:52.219 "io_qpairs": 26, 00:13:52.219 "current_admin_qpairs": 0, 00:13:52.219 "current_io_qpairs": 0, 00:13:52.219 "pending_bdev_io": 0, 00:13:52.219 "completed_nvme_io": 77, 00:13:52.219 "transports": [ 00:13:52.219 { 00:13:52.219 "trtype": "RDMA", 00:13:52.219 "pending_data_buffer": 0, 00:13:52.219 "devices": [ 00:13:52.219 { 00:13:52.219 "name": "mlx5_0", 00:13:52.219 "polls": 6536099, 00:13:52.219 "idle_polls": 6535900, 00:13:52.219 "completions": 219, 00:13:52.219 "requests": 109, 00:13:52.219 "request_latency": 19625372, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 "pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 177, 00:13:52.219 "send_doorbell_updates": 99, 00:13:52.219 "total_recv_wrs": 4205, 00:13:52.219 "recv_doorbell_updates": 99 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "name": "mlx5_1", 00:13:52.219 "polls": 6536099, 00:13:52.219 "idle_polls": 6536099, 00:13:52.219 "completions": 0, 00:13:52.219 "requests": 0, 00:13:52.219 "request_latency": 0, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 "pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 0, 00:13:52.219 "send_doorbell_updates": 0, 00:13:52.219 "total_recv_wrs": 4096, 00:13:52.219 "recv_doorbell_updates": 1 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "name": "nvmf_tgt_poll_group_003", 00:13:52.219 "admin_qpairs": 2, 00:13:52.219 "io_qpairs": 26, 00:13:52.219 "current_admin_qpairs": 0, 00:13:52.219 "current_io_qpairs": 0, 00:13:52.219 "pending_bdev_io": 0, 00:13:52.219 "completed_nvme_io": 28, 00:13:52.219 "transports": [ 00:13:52.219 { 00:13:52.219 "trtype": "RDMA", 00:13:52.219 "pending_data_buffer": 0, 00:13:52.219 "devices": [ 00:13:52.219 { 00:13:52.219 "name": "mlx5_0", 00:13:52.219 "polls": 5139678, 00:13:52.219 "idle_polls": 5139499, 00:13:52.219 "completions": 180, 00:13:52.219 "requests": 90, 00:13:52.219 "request_latency": 9200788, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 "pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 123, 00:13:52.219 "send_doorbell_updates": 90, 00:13:52.219 "total_recv_wrs": 4186, 00:13:52.219 "recv_doorbell_updates": 91 00:13:52.219 }, 00:13:52.219 { 00:13:52.219 "name": "mlx5_1", 00:13:52.219 "polls": 5139678, 00:13:52.219 "idle_polls": 5139678, 00:13:52.219 "completions": 0, 00:13:52.219 "requests": 0, 00:13:52.219 "request_latency": 0, 00:13:52.219 "pending_free_request": 0, 00:13:52.219 "pending_rdma_read": 0, 00:13:52.219 "pending_rdma_write": 0, 00:13:52.219 "pending_rdma_send": 0, 00:13:52.219 "total_send_wrs": 0, 00:13:52.219 "send_doorbell_updates": 0, 00:13:52.219 "total_recv_wrs": 4096, 00:13:52.219 "recv_doorbell_updates": 1 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 } 00:13:52.219 ] 00:13:52.219 }' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:52.219 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1342 > 0 )) 00:13:52.220 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:13:52.220 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:13:52.220 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:13:52.220 10:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 128344822 > 0 )) 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:52.220 rmmod nvme_rdma 00:13:52.220 rmmod nvme_fabrics 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.220 
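jsum, evaluated four times above, is the summing helper behind those (( ... > 0 )) assertions; its body is reconstructed here from the jq and awk stages visible in the trace (a paraphrase of target/rpc.sh, which filters the $stats snapshot captured at rpc.sh@110):

jsum() {
    local filter=$1
    # extract one numeric field per poll group / device, then total them
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

All four aggregates checked above (7 admin qpairs, 105 I/O qpairs, 1342 RDMA completions, 128344822 accumulated request latency) come out strictly positive, as required, and they match the per-group numbers in the dump: 2+2+1+2 admin and 27+26+26+26 I/O qpairs.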
10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1393919 ']' 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1393919 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 1393919 ']' 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 1393919 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.220 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1393919 00:13:52.478 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.478 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.478 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1393919' 00:13:52.478 killing process with pid 1393919 00:13:52.478 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 1393919 00:13:52.478 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 1393919 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:52.738 00:13:52.738 real 0m59.603s 00:13:52.738 user 3m39.006s 00:13:52.738 sys 0m5.818s 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.738 ************************************ 00:13:52.738 END TEST nvmf_rpc 00:13:52.738 ************************************ 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.738 ************************************ 00:13:52.738 START TEST nvmf_invalid 00:13:52.738 ************************************ 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:52.738 * Looking for test storage... 
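nvmf_rpc closes out at 59.6 s of wall time and autotest dispatches the next suite through the same run_test wrapper that produced the START/END banners and the real/user/sys timing above. In outline (a paraphrase of the common/autotest_common.sh wrapper, not its verbatim body):

run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"            # the suite itself, e.g. invalid.sh --transport=rdma
    echo "END TEST $name"
}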
00:13:52.738 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:52.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.738 --rc genhtml_branch_coverage=1 00:13:52.738 --rc genhtml_function_coverage=1 00:13:52.738 --rc genhtml_legend=1 00:13:52.738 --rc geninfo_all_blocks=1 00:13:52.738 --rc geninfo_unexecuted_blocks=1 00:13:52.738 00:13:52.738 ' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:52.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.738 --rc genhtml_branch_coverage=1 00:13:52.738 --rc genhtml_function_coverage=1 00:13:52.738 --rc genhtml_legend=1 00:13:52.738 --rc geninfo_all_blocks=1 00:13:52.738 --rc geninfo_unexecuted_blocks=1 00:13:52.738 00:13:52.738 ' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:52.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.738 --rc genhtml_branch_coverage=1 00:13:52.738 --rc genhtml_function_coverage=1 00:13:52.738 --rc genhtml_legend=1 00:13:52.738 --rc geninfo_all_blocks=1 00:13:52.738 --rc geninfo_unexecuted_blocks=1 00:13:52.738 00:13:52.738 ' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:52.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.738 --rc genhtml_branch_coverage=1 00:13:52.738 --rc genhtml_function_coverage=1 00:13:52.738 --rc genhtml_legend=1 00:13:52.738 --rc geninfo_all_blocks=1 00:13:52.738 --rc geninfo_unexecuted_blocks=1 00:13:52.738 00:13:52.738 ' 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:52.738 
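The cmp_versions walk above is the nvmf_invalid bootstrap probing lcov so the right coverage flags get exported: each version string is split on IFS=.-: and compared component-wise, and since lcov here is 1.x, `lt 1.15 2` succeeds and the lcov-1 option set is chosen. A compact re-derivation for plain numeric versions (the scripts/common.sh original also copes with suffixes, so treat this as a sketch):

lt() {   # true if version $1 sorts before version $2; e.g. lt 1.15 2
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}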
10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.738 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.997 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.998 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.998 10:55:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.267 10:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:13:58.267 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:13:58.267 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:13:58.267 Found net devices under 0000:af:00.0: mlx_0_0 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:13:58.267 Found net devices under 0000:af:00.1: mlx_0_1 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:58.267 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.268 10:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:58.268 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:58.268 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:13:58.268 altname enp175s0f0np0 00:13:58.268 altname ens801f0np0 00:13:58.268 inet 192.168.100.8/24 scope global mlx_0_0 00:13:58.268 valid_lft forever preferred_lft forever 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:58.268 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:58.268 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:13:58.268 altname enp175s0f1np1 00:13:58.268 altname ens801f1np1 00:13:58.268 inet 192.168.100.9/24 scope global mlx_0_1 00:13:58.268 valid_lft forever preferred_lft forever 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:58.268 192.168.100.9' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:58.268 192.168.100.9' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:58.268 192.168.100.9' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:58.268 10:55:46 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1406340 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1406340 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 1406340 ']' 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:58.268 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.269 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.269 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:58.269 10:55:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.269 [2024-11-15 10:55:46.851096] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:13:58.269 [2024-11-15 10:55:46.851141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.269 [2024-11-15 10:55:46.913115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.269 [2024-11-15 10:55:46.955531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.269 [2024-11-15 10:55:46.955565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.269 [2024-11-15 10:55:46.955572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.269 [2024-11-15 10:55:46.955578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.269 [2024-11-15 10:55:46.955583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
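At this point nvmftestinit has finished bringing up the fabric: it scanned the PCI bus for supported NICs, matched the two mlx5 ports at 0000:af:00.0/1 (0x15b3 - 0x1017), mapped them to mlx_0_0 and mlx_0_1, loaded the RDMA kernel modules (ib_core, ib_uverbs, rdma_cm, ...), and read 192.168.100.8 and 192.168.100.9 back off the interfaces with the ip -o -4 addr show | awk | cut pipeline traced above. The entries around this point start the target itself and block until its RPC socket answers. A minimal sketch of that start-and-wait step, assuming the binary and rpc.py paths from this run; the retry loop is illustrative, not the harness's actual waitforlisten:

    # Launch the target with the same flags as above: shm id 0 (-i 0),
    # all tracepoint groups (-e 0xFFFF), cores 0-3 (-m 0xF).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket; rpc_get_methods is a cheap query that only
    # succeeds once the app is listening on /var/tmp/spdk.sock.
    for _ in $(seq 1 100); do
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done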
00:13:58.269 [2024-11-15 10:55:46.957125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.269 [2024-11-15 10:55:46.957240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.269 [2024-11-15 10:55:46.957261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.269 [2024-11-15 10:55:46.957263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:58.269 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29595 00:13:58.527 [2024-11-15 10:55:47.259232] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:58.527 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:58.527 { 00:13:58.527 "nqn": "nqn.2016-06.io.spdk:cnode29595", 00:13:58.527 "tgt_name": "foobar", 00:13:58.527 "method": "nvmf_create_subsystem", 00:13:58.527 "req_id": 1 00:13:58.527 } 00:13:58.527 Got JSON-RPC error response 00:13:58.527 response: 00:13:58.527 { 00:13:58.527 "code": -32603, 00:13:58.527 "message": "Unable to find target foobar" 00:13:58.527 }' 00:13:58.527 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:58.527 { 00:13:58.527 "nqn": "nqn.2016-06.io.spdk:cnode29595", 00:13:58.527 "tgt_name": "foobar", 00:13:58.527 "method": "nvmf_create_subsystem", 00:13:58.527 "req_id": 1 00:13:58.527 } 00:13:58.527 Got JSON-RPC error response 00:13:58.527 response: 00:13:58.527 { 00:13:58.527 "code": -32603, 00:13:58.527 "message": "Unable to find target foobar" 00:13:58.527 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:58.527 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:58.527 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17229 00:13:58.786 [2024-11-15 10:55:47.463932] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17229: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:58.786 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:58.786 { 00:13:58.786 "nqn": "nqn.2016-06.io.spdk:cnode17229", 00:13:58.786 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:58.786 "method": "nvmf_create_subsystem", 00:13:58.786 "req_id": 1 00:13:58.786 } 00:13:58.786 Got JSON-RPC 
error response 00:13:58.786 response: 00:13:58.786 { 00:13:58.786 "code": -32602, 00:13:58.786 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:58.786 }' 00:13:58.786 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:58.786 { 00:13:58.786 "nqn": "nqn.2016-06.io.spdk:cnode17229", 00:13:58.786 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:58.786 "method": "nvmf_create_subsystem", 00:13:58.786 "req_id": 1 00:13:58.786 } 00:13:58.786 Got JSON-RPC error response 00:13:58.786 response: 00:13:58.786 { 00:13:58.786 "code": -32602, 00:13:58.786 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:58.786 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:58.786 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:58.786 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2392 00:13:59.046 [2024-11-15 10:55:47.672625] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2392: invalid model number 'SPDK_Controller' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:59.046 { 00:13:59.046 "nqn": "nqn.2016-06.io.spdk:cnode2392", 00:13:59.046 "model_number": "SPDK_Controller\u001f", 00:13:59.046 "method": "nvmf_create_subsystem", 00:13:59.046 "req_id": 1 00:13:59.046 } 00:13:59.046 Got JSON-RPC error response 00:13:59.046 response: 00:13:59.046 { 00:13:59.046 "code": -32602, 00:13:59.046 "message": "Invalid MN SPDK_Controller\u001f" 00:13:59.046 }' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:59.046 { 00:13:59.046 "nqn": "nqn.2016-06.io.spdk:cnode2392", 00:13:59.046 "model_number": "SPDK_Controller\u001f", 00:13:59.046 "method": "nvmf_create_subsystem", 00:13:59.046 "req_id": 1 00:13:59.046 } 00:13:59.046 Got JSON-RPC error response 00:13:59.046 response: 00:13:59.046 { 00:13:59.046 "code": -32602, 00:13:59.046 "message": "Invalid MN SPDK_Controller\u001f" 00:13:59.046 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 92 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.046 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=o 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
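The long printf %x / echo -e run above (and continuing below) is gen_random_s assembling a 21-character serial number one character at a time: each iteration picks an entry from the chars table of ASCII codes 32 through 127, converts it to hex with printf %x, and renders it with echo -e. RANDOM=0 was set at the top of the test, so the sequence is deterministic across runs. A condensed, illustrative re-implementation of the same idea, not the harness's exact code:

    gen_random_s() {
        local length=$1 ll string=
        local chars=( $(seq 32 127) )    # same code range as the table above
        for (( ll = 0; ll < length; ll++ )); do
            # printf %x: decimal code -> hex; echo -e: hex escape -> character
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # printf instead of echo so a string that happens to start with '-'
        # is not eaten as an option (the real script guards this case at
        # invalid.sh@28 with a [[ ... == \- ]] check, visible below).
        printf '%s\n' "$string"
    }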
00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\O,Pg%;s.~l]QoG9u;?4z' 00:13:59.047 10:55:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\O,Pg%;s.~l]QoG9u;?4z' nqn.2016-06.io.spdk:cnode6766 00:13:59.306 [2024-11-15 10:55:48.025853] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6766: invalid serial number '\O,Pg%;s.~l]QoG9u;?4z' 00:13:59.306 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:59.306 { 00:13:59.306 "nqn": "nqn.2016-06.io.spdk:cnode6766", 00:13:59.306 "serial_number": "\\O,Pg%;s.~l]QoG9u;?4z", 00:13:59.306 "method": "nvmf_create_subsystem", 00:13:59.306 "req_id": 1 00:13:59.306 } 00:13:59.306 Got JSON-RPC error response 00:13:59.306 response: 00:13:59.306 { 00:13:59.306 "code": -32602, 00:13:59.306 "message": "Invalid SN \\O,Pg%;s.~l]QoG9u;?4z" 00:13:59.306 }' 00:13:59.306 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:59.306 { 00:13:59.306 "nqn": "nqn.2016-06.io.spdk:cnode6766", 00:13:59.306 "serial_number": "\\O,Pg%;s.~l]QoG9u;?4z", 00:13:59.306 "method": "nvmf_create_subsystem", 00:13:59.306 "req_id": 1 00:13:59.306 } 00:13:59.306 Got JSON-RPC error response 00:13:59.306 response: 00:13:59.306 { 00:13:59.306 "code": -32602, 00:13:59.306 "message": "Invalid SN \\O,Pg%;s.~l]QoG9u;?4z" 00:13:59.306 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:59.306 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:59.306 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
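Every negative test in this file follows the contract visible above: issue nvmf_create_subsystem with exactly one deliberately bad field, capture the JSON-RPC error response, and pattern-match the message (Unable to find target, Invalid SN, Invalid MN). A minimal sketch of that pattern, reusing this run's rpc.py path; the expect_rpc_error helper and the bad_sn variable are illustrative names, not harness code:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    expect_rpc_error() {
        local pattern=$1; shift
        local out
        # The RPC is expected to fail; a clean exit is itself a test failure.
        out=$("$rpc" "$@" 2>&1) && return 1
        [[ $out == *"$pattern"* ]]
    }
    # bad_sn: e.g. the 21-character string generated above.
    expect_rpc_error 'Invalid SN' \
        nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode6766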
00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x2c' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.307 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:59.567 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:59.568 10:55:48 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 117
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71'
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]]
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'yKK.3zm`ur,4PLqtNs%[F8Rk\`dx~wx3$h4J$uAqr'
00:13:59.568 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'yKK.3zm`ur,4PLqtNs%[F8Rk\`dx~wx3$h4J$uAqr' nqn.2016-06.io.spdk:cnode6343
00:13:59.827 [2024-11-15 10:55:48.491394] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6343: invalid model number 'yKK.3zm`ur,4PLqtNs%[F8Rk\`dx~wx3$h4J$uAqr'
00:13:59.827 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:13:59.827 {
00:13:59.827 "nqn": "nqn.2016-06.io.spdk:cnode6343",
00:13:59.827 "model_number": "yKK.3zm`ur,4PLqtNs%[F8Rk\\`dx~wx3$h4J$uAqr",
00:13:59.827 "method": "nvmf_create_subsystem",
00:13:59.827 "req_id": 1
00:13:59.827 }
00:13:59.827 Got JSON-RPC error response
00:13:59.827 response:
00:13:59.827 {
00:13:59.827 "code": -32602,
00:13:59.827 "message": "Invalid MN yKK.3zm`ur,4PLqtNs%[F8Rk\\`dx~wx3$h4J$uAqr"
00:13:59.827 }'
00:13:59.827 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:13:59.827 {
00:13:59.827 "nqn": "nqn.2016-06.io.spdk:cnode6343",
00:13:59.827 "model_number": "yKK.3zm`ur,4PLqtNs%[F8Rk\\`dx~wx3$h4J$uAqr",
00:13:59.827 "method": "nvmf_create_subsystem",
00:13:59.827 "req_id": 1
00:13:59.827 }
00:13:59.827 Got JSON-RPC error response
00:13:59.827 response:
00:13:59.827 {
00:13:59.827 "code": -32602,
00:13:59.827 "message": "Invalid MN yKK.3zm`ur,4PLqtNs%[F8Rk\\`dx~wx3$h4J$uAqr"
00:13:59.827 } == *\I\n\v\a\l\i\d\ \M\N* ]]
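The xtrace above is target/invalid.sh assembling a random model number one byte at a time: printf %x turns a character code into hex, echo -e '\xNN' turns that hex back into a literal character, and string+= accumulates it under the (( ll < length )) loop. A minimal sketch of the same technique, for reference only (the function name and the printable-ASCII range 33-126 are assumptions inferred from the trace, not the script's exact source):

    gen_random_string() {
        local length=$1 string='' ll code
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 94 + 33 ))                 # printable ASCII, 0x21-0x7e (assumed range)
            string+=$(echo -e "\x$(printf %x "$code")")  # hex code -> literal character
        done
        echo "$string"
    }

    mn=$(gen_random_string 41)   # e.g. the 41-character string echoed above

The generated string is 41 characters, one more than the 40-byte NVMe model-number field, which is why rpc_nvmf_create_subsystem rejects it with "Invalid MN" in the records that follow.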
00:13:59.827 "req_id": 1 00:13:59.827 } 00:13:59.827 Got JSON-RPC error response 00:13:59.827 response: 00:13:59.827 { 00:13:59.827 "code": -32602, 00:13:59.827 "message": "Invalid MN yKK.3zm`ur,4PLqtNs%[F8Rk\\`dx~wx3$h4J$uAqr" 00:13:59.827 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:59.827 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:14:00.085 [2024-11-15 10:55:48.721064] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa5db50/0xa62040) succeed. 00:14:00.085 [2024-11-15 10:55:48.730557] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa5f1e0/0xaa36e0) succeed. 00:14:00.085 10:55:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:00.344 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:14:00.344 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:14:00.344 192.168.100.9' 00:14:00.344 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:00.344 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:14:00.344 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:14:00.602 [2024-11-15 10:55:49.274619] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:00.602 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:00.602 { 00:14:00.602 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:00.602 "listen_address": { 00:14:00.602 "trtype": "rdma", 00:14:00.602 "traddr": "192.168.100.8", 00:14:00.602 "trsvcid": "4421" 00:14:00.602 }, 00:14:00.602 "method": "nvmf_subsystem_remove_listener", 00:14:00.602 "req_id": 1 00:14:00.602 } 00:14:00.602 Got JSON-RPC error response 00:14:00.602 response: 00:14:00.602 { 00:14:00.602 "code": -32602, 00:14:00.602 "message": "Invalid parameters" 00:14:00.602 }' 00:14:00.602 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:00.602 { 00:14:00.602 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:00.602 "listen_address": { 00:14:00.602 "trtype": "rdma", 00:14:00.602 "traddr": "192.168.100.8", 00:14:00.602 "trsvcid": "4421" 00:14:00.602 }, 00:14:00.602 "method": "nvmf_subsystem_remove_listener", 00:14:00.602 "req_id": 1 00:14:00.602 } 00:14:00.602 Got JSON-RPC error response 00:14:00.602 response: 00:14:00.602 { 00:14:00.602 "code": -32602, 00:14:00.602 "message": "Invalid parameters" 00:14:00.602 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:00.602 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23380 -i 0 00:14:00.861 [2024-11-15 10:55:49.491350] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23380: invalid cntlid range [0-65519] 00:14:00.861 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:00.861 { 00:14:00.861 "nqn": 
"nqn.2016-06.io.spdk:cnode23380", 00:14:00.861 "min_cntlid": 0, 00:14:00.861 "method": "nvmf_create_subsystem", 00:14:00.861 "req_id": 1 00:14:00.861 } 00:14:00.861 Got JSON-RPC error response 00:14:00.861 response: 00:14:00.861 { 00:14:00.861 "code": -32602, 00:14:00.861 "message": "Invalid cntlid range [0-65519]" 00:14:00.861 }' 00:14:00.861 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:00.861 { 00:14:00.861 "nqn": "nqn.2016-06.io.spdk:cnode23380", 00:14:00.861 "min_cntlid": 0, 00:14:00.861 "method": "nvmf_create_subsystem", 00:14:00.861 "req_id": 1 00:14:00.861 } 00:14:00.861 Got JSON-RPC error response 00:14:00.861 response: 00:14:00.861 { 00:14:00.861 "code": -32602, 00:14:00.861 "message": "Invalid cntlid range [0-65519]" 00:14:00.861 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:00.861 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11594 -i 65520 00:14:00.861 [2024-11-15 10:55:49.700118] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11594: invalid cntlid range [65520-65519] 00:14:00.861 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:00.861 { 00:14:00.861 "nqn": "nqn.2016-06.io.spdk:cnode11594", 00:14:00.861 "min_cntlid": 65520, 00:14:00.861 "method": "nvmf_create_subsystem", 00:14:00.861 "req_id": 1 00:14:00.861 } 00:14:00.861 Got JSON-RPC error response 00:14:00.861 response: 00:14:00.861 { 00:14:00.861 "code": -32602, 00:14:00.861 "message": "Invalid cntlid range [65520-65519]" 00:14:00.861 }' 00:14:00.861 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:00.861 { 00:14:00.861 "nqn": "nqn.2016-06.io.spdk:cnode11594", 00:14:00.861 "min_cntlid": 65520, 00:14:00.861 "method": "nvmf_create_subsystem", 00:14:00.861 "req_id": 1 00:14:00.861 } 00:14:00.861 Got JSON-RPC error response 00:14:00.861 response: 00:14:00.861 { 00:14:00.861 "code": -32602, 00:14:00.861 "message": "Invalid cntlid range [65520-65519]" 00:14:00.861 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:00.861 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17812 -I 0 00:14:01.121 [2024-11-15 10:55:49.904847] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17812: invalid cntlid range [1-0] 00:14:01.121 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:01.121 { 00:14:01.121 "nqn": "nqn.2016-06.io.spdk:cnode17812", 00:14:01.121 "max_cntlid": 0, 00:14:01.121 "method": "nvmf_create_subsystem", 00:14:01.121 "req_id": 1 00:14:01.121 } 00:14:01.121 Got JSON-RPC error response 00:14:01.121 response: 00:14:01.121 { 00:14:01.121 "code": -32602, 00:14:01.121 "message": "Invalid cntlid range [1-0]" 00:14:01.121 }' 00:14:01.121 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:01.121 { 00:14:01.121 "nqn": "nqn.2016-06.io.spdk:cnode17812", 00:14:01.121 "max_cntlid": 0, 00:14:01.121 "method": "nvmf_create_subsystem", 00:14:01.121 "req_id": 1 00:14:01.121 } 00:14:01.121 Got JSON-RPC error response 00:14:01.121 response: 00:14:01.121 { 00:14:01.121 "code": -32602, 00:14:01.121 "message": "Invalid cntlid range [1-0]" 
00:14:01.121 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.121 10:55:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6470 -I 65520 00:14:01.380 [2024-11-15 10:55:50.113653] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6470: invalid cntlid range [1-65520] 00:14:01.380 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:01.380 { 00:14:01.380 "nqn": "nqn.2016-06.io.spdk:cnode6470", 00:14:01.380 "max_cntlid": 65520, 00:14:01.380 "method": "nvmf_create_subsystem", 00:14:01.380 "req_id": 1 00:14:01.380 } 00:14:01.380 Got JSON-RPC error response 00:14:01.380 response: 00:14:01.380 { 00:14:01.380 "code": -32602, 00:14:01.380 "message": "Invalid cntlid range [1-65520]" 00:14:01.380 }' 00:14:01.380 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:01.380 { 00:14:01.380 "nqn": "nqn.2016-06.io.spdk:cnode6470", 00:14:01.380 "max_cntlid": 65520, 00:14:01.380 "method": "nvmf_create_subsystem", 00:14:01.380 "req_id": 1 00:14:01.380 } 00:14:01.380 Got JSON-RPC error response 00:14:01.380 response: 00:14:01.380 { 00:14:01.380 "code": -32602, 00:14:01.380 "message": "Invalid cntlid range [1-65520]" 00:14:01.380 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.380 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30526 -i 6 -I 5 00:14:01.638 [2024-11-15 10:55:50.318398] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30526: invalid cntlid range [6-5] 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:01.638 { 00:14:01.638 "nqn": "nqn.2016-06.io.spdk:cnode30526", 00:14:01.638 "min_cntlid": 6, 00:14:01.638 "max_cntlid": 5, 00:14:01.638 "method": "nvmf_create_subsystem", 00:14:01.638 "req_id": 1 00:14:01.638 } 00:14:01.638 Got JSON-RPC error response 00:14:01.638 response: 00:14:01.638 { 00:14:01.638 "code": -32602, 00:14:01.638 "message": "Invalid cntlid range [6-5]" 00:14:01.638 }' 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:01.638 { 00:14:01.638 "nqn": "nqn.2016-06.io.spdk:cnode30526", 00:14:01.638 "min_cntlid": 6, 00:14:01.638 "max_cntlid": 5, 00:14:01.638 "method": "nvmf_create_subsystem", 00:14:01.638 "req_id": 1 00:14:01.638 } 00:14:01.638 Got JSON-RPC error response 00:14:01.638 response: 00:14:01.638 { 00:14:01.638 "code": -32602, 00:14:01.638 "message": "Invalid cntlid range [6-5]" 00:14:01.638 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:01.638 { 00:14:01.638 "name": "foobar", 00:14:01.638 "method": "nvmf_delete_target", 00:14:01.638 "req_id": 1 00:14:01.638 } 00:14:01.638 Got JSON-RPC error response 00:14:01.638 response: 00:14:01.638 { 00:14:01.638 "code": -32602, 00:14:01.638 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:14:01.638 }' 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:01.638 { 00:14:01.638 "name": "foobar", 00:14:01.638 "method": "nvmf_delete_target", 00:14:01.638 "req_id": 1 00:14:01.638 } 00:14:01.638 Got JSON-RPC error response 00:14:01.638 response: 00:14:01.638 { 00:14:01.638 "code": -32602, 00:14:01.638 "message": "The specified target doesn't exist, cannot delete it." 00:14:01.638 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:01.638 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:01.639 rmmod nvme_rdma 00:14:01.639 rmmod nvme_fabrics 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1406340 ']' 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1406340 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 1406340 ']' 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 1406340 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.639 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1406340 00:14:01.897 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:01.897 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:01.897 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1406340' 00:14:01.897 killing process with pid 1406340 00:14:01.897 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 1406340 00:14:01.897 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 1406340 00:14:01.897 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:02.156 10:55:50 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:02.156 00:14:02.156 real 0m9.337s 00:14:02.156 user 0m19.095s 00:14:02.156 sys 0m4.792s 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.156 ************************************ 00:14:02.156 END TEST nvmf_invalid 00:14:02.156 ************************************ 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.156 ************************************ 00:14:02.156 START TEST nvmf_connect_stress 00:14:02.156 ************************************ 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:02.156 * Looking for test storage... 00:14:02.156 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:14:02.156 10:55:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:02.156 10:55:51 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:02.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.156 --rc genhtml_branch_coverage=1 00:14:02.156 --rc genhtml_function_coverage=1 00:14:02.156 --rc genhtml_legend=1 00:14:02.156 --rc geninfo_all_blocks=1 00:14:02.156 --rc geninfo_unexecuted_blocks=1 00:14:02.156 00:14:02.156 ' 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:02.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.156 --rc genhtml_branch_coverage=1 00:14:02.156 --rc genhtml_function_coverage=1 00:14:02.156 --rc genhtml_legend=1 00:14:02.156 --rc geninfo_all_blocks=1 00:14:02.156 --rc geninfo_unexecuted_blocks=1 00:14:02.156 00:14:02.156 ' 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:02.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.156 --rc genhtml_branch_coverage=1 00:14:02.156 --rc genhtml_function_coverage=1 00:14:02.156 --rc genhtml_legend=1 00:14:02.156 --rc geninfo_all_blocks=1 00:14:02.156 --rc geninfo_unexecuted_blocks=1 00:14:02.156 00:14:02.156 ' 00:14:02.156 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:02.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.156 --rc genhtml_branch_coverage=1 00:14:02.156 --rc genhtml_function_coverage=1 00:14:02.157 --rc genhtml_legend=1 00:14:02.157 --rc geninfo_all_blocks=1 
00:14:02.157 --rc geninfo_unexecuted_blocks=1 00:14:02.157 00:14:02.157 ' 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:02.157 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.416 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:02.416 10:55:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:14:07.715 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:14:07.715 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:14:07.715 Found net devices under 0000:af:00.0: mlx_0_0 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:14:07.715 Found net devices under 0000:af:00.1: mlx_0_1 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ 
rdma == rdma ]] 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:07.715 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
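This stretch of nvmf/common.sh is allocate_nic_ips walking get_rdma_if_list: rxe_cfg reports which interfaces are RDMA-capable, and each mlx net device found during the PCI scan is matched against that list before an address is assigned. A condensed sketch of the matching loop, using the same names the trace shows but simplified:

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # RDMA-capable interfaces
        for net_dev in "${net_devs[@]}"; do            # net_devs was filled during the PCI scan
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2                         # emit each matching device once, then move on
                fi
            done
        done
    }

Each emitted name (mlx_0_0, mlx_0_1 here) then receives an address from the 192.168.100.0/24 test range, as the ip addr output just below confirms.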
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:14:07.716 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:14:07.716 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff
00:14:07.716 altname enp175s0f0np0
00:14:07.716 altname ens801f0np0
00:14:07.716 inet 192.168.100.8/24 scope global mlx_0_0
00:14:07.716 valid_lft forever preferred_lft forever
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:14:07.716 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:14:07.716 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff
00:14:07.716 altname enp175s0f1np1
00:14:07.716 altname ens801f1np1
00:14:07.716 inet 192.168.100.9/24 scope global mlx_0_1
00:14:07.716 valid_lft forever preferred_lft forever
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:14:07.716 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:14:07.717 192.168.100.9'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:14:07.717 192.168.100.9'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:14:07.717 192.168.100.9'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1410165
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1410165
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 1410165 ']'
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
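The trace above is nvmf/common.sh deriving the two RDMA target addresses: get_ip_address extracts an interface's IPv4 address by piping `ip -o -4 addr show` through awk and cut, and the newline-separated RDMA_IP_LIST is then split with head/tail. A minimal sketch of that logic, reconstructed from the trace rather than copied from the helper itself:

    get_ip_address() {
        local interface=$1
        # field $4 of `ip -o -4 addr show` is "ADDR/PREFIX"; cut keeps the address
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9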
00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.717 [2024-11-15 10:55:56.336299] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:14:07.717 [2024-11-15 10:55:56.336347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.717 [2024-11-15 10:55:56.399437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.717 [2024-11-15 10:55:56.441766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.717 [2024-11-15 10:55:56.441803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.717 [2024-11-15 10:55:56.441810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.717 [2024-11-15 10:55:56.441816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.717 [2024-11-15 10:55:56.441821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.717 [2024-11-15 10:55:56.443369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.717 [2024-11-15 10:55:56.443445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.717 [2024-11-15 10:55:56.443447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.717 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.977 [2024-11-15 10:55:56.601557] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9349e0/0x938ed0) succeed. 00:14:07.977 [2024-11-15 10:55:56.610849] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x935fd0/0x97a570) succeed. 
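At this point the target is up (nvmf_tgt -i 0 -e 0xFFFF -m 0xE, one reactor each on cores 1-3), waitforlisten has seen /var/tmp/spdk.sock answer, and the RDMA transport exists; the entries that follow create the subsystem, attach the 192.168.100.8:4420 listener and back it with a null bdev. Issued by hand through SPDK's standard RPC client, the same sequence would look roughly like this (rpc.py standing in for the harness's rpc_cmd wrapper; all names and flags as logged):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512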
00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.977 [2024-11-15 10:55:56.723782] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.977 NULL1 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1410336 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:07.977 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.978 10:55:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.545 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.545 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:08.545 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.545 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.545 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.803 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.803 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:08.803 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.803 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.803 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.062 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.062 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:09.062 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.062 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.062 10:55:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.320 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.320 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:09.320 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.320 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.320 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.579 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.579 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 
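From here down to the "No such process" entry, the harness is inside connect_stress.sh's monitor loop: the connect_stress stressor was launched in the background against 192.168.100.8:4420 with a 10-second runtime (PERF_PID=1410336), an rpc.txt batch was assembled from twenty cat'ed commands (the @27/@28 for/cat pairs above; their payloads are not echoed in the trace), and the loop replays that batch for as long as kill -0 reports the stressor alive. A sketch of the pattern as traced, not the verbatim script, with the batch contents elided as they are in the log:

    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2> /dev/null; do  # kill -0 only tests that the PID still exists
        rpc_cmd < "$rpcs"                       # replay the 20-command batch against the target
    done
    wait "$PERF_PID"                            # collect the stressor's exit status
    rm -f "$rpcs"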
00:14:09.579 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.579 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.579 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.146 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.146 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:10.146 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.146 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.146 10:55:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.405 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.405 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:10.405 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.405 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.405 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.663 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.663 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:10.663 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.663 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.663 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.921 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.921 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:10.921 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.921 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.921 10:55:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.488 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.488 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:11.488 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.488 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.488 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.747 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.747 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1410336 00:14:11.747 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.747 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.747 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.005 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.005 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:12.005 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.005 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.005 10:56:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.263 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.263 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:12.263 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.263 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.263 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.521 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.521 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:12.521 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.521 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.521 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.087 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.087 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:13.087 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.087 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.087 10:56:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.346 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.346 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:13.346 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.346 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.346 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.605 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.605 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 1410336 00:14:13.605 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.605 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.605 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.864 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.864 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:13.864 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.864 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.864 10:56:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.432 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.432 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:14.432 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.432 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.432 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.691 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.691 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:14.691 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.691 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.691 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.950 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.950 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:14.950 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.950 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.950 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.209 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.209 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:15.209 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.209 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.209 10:56:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.467 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.467 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1410336 00:14:15.467 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.467 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.467 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.034 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.034 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:16.034 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.034 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.034 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.293 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.293 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:16.293 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.293 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.293 10:56:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.551 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.551 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:16.551 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.551 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.551 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.810 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.810 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:16.810 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.810 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.810 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.377 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.377 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:17.377 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.377 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.377 10:56:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.635 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.635 10:56:06 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:17.635 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.635 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.635 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.894 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.894 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:17.894 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.894 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.894 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.152 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.152 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:18.152 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.152 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.152 10:56:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.152 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1410336 00:14:18.409 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1410336) - No such process 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1410336 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:18.409 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:18.409 rmmod nvme_rdma 00:14:18.409 rmmod nvme_fabrics 00:14:18.689 10:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1410165 ']' 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1410165 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 1410165 ']' 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 1410165 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1410165 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1410165' 00:14:18.689 killing process with pid 1410165 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 1410165 00:14:18.689 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 1410165 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:18.999 00:14:18.999 real 0m16.744s 00:14:18.999 user 0m41.102s 00:14:18.999 sys 0m6.066s 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.999 ************************************ 00:14:18.999 END TEST nvmf_connect_stress 00:14:18.999 ************************************ 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.999 ************************************ 00:14:18.999 START TEST nvmf_fused_ordering 00:14:18.999 ************************************ 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:14:18.999 * Looking for test storage... 
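The teardown traced above is the stock pattern: nvmftestfini unloads nvme-rdma/nvme-fabrics under set +e (hence the rmmod lines and the {1..20} retry loop), then killprocess confirms the PID was supplied and still answers kill -0, reads the command name with ps so a sudo wrapper is never killed directly (here it resolves to reactor_1), and finally kills and waits. A condensed, runnable sketch of what the log shows; the real helper in autotest_common.sh also handles the sudo and non-Linux branches, which this run never takes:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 0          # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the real helper kills sudo's child instead when process_name is "sudo"
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # works because the PID is our child
    }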
00:14:18.999 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.999 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:19.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.000 --rc genhtml_branch_coverage=1 00:14:19.000 --rc genhtml_function_coverage=1 00:14:19.000 --rc genhtml_legend=1 00:14:19.000 --rc geninfo_all_blocks=1 00:14:19.000 --rc geninfo_unexecuted_blocks=1 00:14:19.000 00:14:19.000 ' 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:19.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.000 --rc genhtml_branch_coverage=1 00:14:19.000 --rc genhtml_function_coverage=1 00:14:19.000 --rc genhtml_legend=1 00:14:19.000 --rc geninfo_all_blocks=1 00:14:19.000 --rc geninfo_unexecuted_blocks=1 00:14:19.000 00:14:19.000 ' 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:19.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.000 --rc genhtml_branch_coverage=1 00:14:19.000 --rc genhtml_function_coverage=1 00:14:19.000 --rc genhtml_legend=1 00:14:19.000 --rc geninfo_all_blocks=1 00:14:19.000 --rc geninfo_unexecuted_blocks=1 00:14:19.000 00:14:19.000 ' 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:19.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.000 --rc genhtml_branch_coverage=1 00:14:19.000 --rc genhtml_function_coverage=1 00:14:19.000 --rc genhtml_legend=1 00:14:19.000 --rc geninfo_all_blocks=1 00:14:19.000 --rc geninfo_unexecuted_blocks=1 00:14:19.000 00:14:19.000 ' 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.000 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.001 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:19.001 10:56:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.397 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:14:24.398 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:14:24.398 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:24.656 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:24.656 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:24.656 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:14:24.656 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:24.656 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:14:24.656 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:14:24.656 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:14:24.657 Found net devices under 0000:af:00.0: mlx_0_0 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:14:24.657 Found net devices under 0000:af:00.1: mlx_0_1 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ 
rdma == rdma ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:24.657 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:24.657 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:14:24.657 altname enp175s0f0np0 00:14:24.657 altname ens801f0np0 00:14:24.657 inet 192.168.100.8/24 scope global mlx_0_0 00:14:24.657 valid_lft forever preferred_lft forever 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:24.657 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:24.657 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:14:24.657 altname enp175s0f1np1 00:14:24.657 altname ens801f1np1 00:14:24.657 inet 192.168.100.9/24 scope global mlx_0_1 00:14:24.657 valid_lft forever preferred_lft forever 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # 
get_available_rdma_ips 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:24.657 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:24.658 192.168.100.9' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:24.658 192.168.100.9' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:24.658 192.168.100.9' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1415337 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1415337 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 1415337 ']' 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:24.658 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.916 [2024-11-15 10:56:13.543536] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:14:24.916 [2024-11-15 10:56:13.543583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.916 [2024-11-15 10:56:13.605003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.916 [2024-11-15 10:56:13.643909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.916 [2024-11-15 10:56:13.643945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.916 [2024-11-15 10:56:13.643953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.916 [2024-11-15 10:56:13.643959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.916 [2024-11-15 10:56:13.643965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.916 [2024-11-15 10:56:13.644528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.916 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:24.916 [2024-11-15 10:56:13.796105] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18ba2d0/0x18be7c0) succeed. 00:14:25.174 [2024-11-15 10:56:13.805246] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18bb780/0x18ffe60) succeed. 
00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.174 [2024-11-15 10:56:13.852559] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.174 NULL1 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.174 10:56:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:25.174 [2024-11-15 10:56:13.912385] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:14:25.174 [2024-11-15 10:56:13.912429] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415363 ] 00:14:25.432 Attached to nqn.2016-06.io.spdk:cnode1 00:14:25.432 Namespace ID: 1 size: 1GB 00:14:25.432 fused_ordering(0) 00:14:25.432 fused_ordering(1) 00:14:25.432 fused_ordering(2) 00:14:25.432 fused_ordering(3) 00:14:25.432 fused_ordering(4) 00:14:25.432 fused_ordering(5) 00:14:25.432 fused_ordering(6) 00:14:25.432 fused_ordering(7) 00:14:25.432 fused_ordering(8) 00:14:25.432 fused_ordering(9) 00:14:25.432 fused_ordering(10) 00:14:25.432 fused_ordering(11) 00:14:25.432 fused_ordering(12) 00:14:25.432 fused_ordering(13) 00:14:25.432 fused_ordering(14) 00:14:25.432 fused_ordering(15) 00:14:25.432 fused_ordering(16) 00:14:25.432 fused_ordering(17) 00:14:25.432 fused_ordering(18) 00:14:25.432 fused_ordering(19) 00:14:25.432 fused_ordering(20) 00:14:25.432 fused_ordering(21) 00:14:25.432 fused_ordering(22) 00:14:25.432 fused_ordering(23) 00:14:25.432 fused_ordering(24) 00:14:25.432 fused_ordering(25) 00:14:25.432 fused_ordering(26) 00:14:25.432 fused_ordering(27) 00:14:25.432 fused_ordering(28) 00:14:25.432 fused_ordering(29) 00:14:25.432 fused_ordering(30) 00:14:25.432 fused_ordering(31) 00:14:25.432 fused_ordering(32) 00:14:25.432 fused_ordering(33) 00:14:25.432 fused_ordering(34) 00:14:25.432 fused_ordering(35) 00:14:25.432 fused_ordering(36) 00:14:25.432 fused_ordering(37) 00:14:25.432 fused_ordering(38) 00:14:25.432 fused_ordering(39) 00:14:25.432 fused_ordering(40) 00:14:25.432 fused_ordering(41) 00:14:25.432 fused_ordering(42) 00:14:25.432 fused_ordering(43) 00:14:25.432 fused_ordering(44) 00:14:25.432 fused_ordering(45) 00:14:25.432 fused_ordering(46) 00:14:25.432 fused_ordering(47) 00:14:25.432 fused_ordering(48) 00:14:25.432 fused_ordering(49) 00:14:25.432 fused_ordering(50) 00:14:25.432 fused_ordering(51) 00:14:25.432 fused_ordering(52) 00:14:25.432 fused_ordering(53) 00:14:25.432 fused_ordering(54) 00:14:25.432 fused_ordering(55) 00:14:25.432 fused_ordering(56) 00:14:25.432 fused_ordering(57) 00:14:25.432 fused_ordering(58) 00:14:25.432 fused_ordering(59) 00:14:25.432 fused_ordering(60) 00:14:25.432 fused_ordering(61) 00:14:25.432 fused_ordering(62) 00:14:25.432 fused_ordering(63) 00:14:25.432 fused_ordering(64) 00:14:25.432 fused_ordering(65) 00:14:25.432 fused_ordering(66) 00:14:25.432 fused_ordering(67) 00:14:25.432 fused_ordering(68) 00:14:25.432 fused_ordering(69) 00:14:25.432 fused_ordering(70) 00:14:25.432 fused_ordering(71) 00:14:25.432 fused_ordering(72) 00:14:25.432 fused_ordering(73) 00:14:25.432 fused_ordering(74) 00:14:25.432 fused_ordering(75) 00:14:25.432 fused_ordering(76) 00:14:25.432 fused_ordering(77) 00:14:25.432 fused_ordering(78) 00:14:25.432 fused_ordering(79) 00:14:25.432 fused_ordering(80) 00:14:25.432 fused_ordering(81) 00:14:25.432 fused_ordering(82) 00:14:25.432 fused_ordering(83) 00:14:25.432 fused_ordering(84) 00:14:25.432 fused_ordering(85) 00:14:25.432 fused_ordering(86) 00:14:25.432 fused_ordering(87) 00:14:25.432 fused_ordering(88) 00:14:25.432 fused_ordering(89) 00:14:25.432 fused_ordering(90) 00:14:25.432 fused_ordering(91) 00:14:25.432 fused_ordering(92) 00:14:25.432 fused_ordering(93) 00:14:25.432 fused_ordering(94) 00:14:25.432 fused_ordering(95) 00:14:25.432 fused_ordering(96) 00:14:25.432 fused_ordering(97) 00:14:25.432 fused_ordering(98) 
[fused_ordering(99) through fused_ordering(958): 860 identical per-command fused_ordering trace lines, timestamps 00:14:25.432 to 00:14:25.951]
00:14:25.951 fused_ordering(959) 00:14:25.951 fused_ordering(960) 00:14:25.951 fused_ordering(961) 00:14:25.951 fused_ordering(962) 00:14:25.951 fused_ordering(963) 00:14:25.951 fused_ordering(964) 00:14:25.951 fused_ordering(965) 00:14:25.951 fused_ordering(966) 00:14:25.951 fused_ordering(967) 00:14:25.951 fused_ordering(968) 00:14:25.951 fused_ordering(969) 00:14:25.951 fused_ordering(970) 00:14:25.951 fused_ordering(971) 00:14:25.951 fused_ordering(972) 00:14:25.951 fused_ordering(973) 00:14:25.951 fused_ordering(974) 00:14:25.951 fused_ordering(975) 00:14:25.951 fused_ordering(976) 00:14:25.951 fused_ordering(977) 00:14:25.951 fused_ordering(978) 00:14:25.951 fused_ordering(979) 00:14:25.951 fused_ordering(980) 00:14:25.951 fused_ordering(981) 00:14:25.951 fused_ordering(982) 00:14:25.951 fused_ordering(983) 00:14:25.951 fused_ordering(984) 00:14:25.951 fused_ordering(985) 00:14:25.951 fused_ordering(986) 00:14:25.951 fused_ordering(987) 00:14:25.951 fused_ordering(988) 00:14:25.951 fused_ordering(989) 00:14:25.951 fused_ordering(990) 00:14:25.951 fused_ordering(991) 00:14:25.951 fused_ordering(992) 00:14:25.951 fused_ordering(993) 00:14:25.951 fused_ordering(994) 00:14:25.951 fused_ordering(995) 00:14:25.951 fused_ordering(996) 00:14:25.951 fused_ordering(997) 00:14:25.951 fused_ordering(998) 00:14:25.951 fused_ordering(999) 00:14:25.951 fused_ordering(1000) 00:14:25.951 fused_ordering(1001) 00:14:25.951 fused_ordering(1002) 00:14:25.951 fused_ordering(1003) 00:14:25.951 fused_ordering(1004) 00:14:25.951 fused_ordering(1005) 00:14:25.951 fused_ordering(1006) 00:14:25.951 fused_ordering(1007) 00:14:25.951 fused_ordering(1008) 00:14:25.951 fused_ordering(1009) 00:14:25.951 fused_ordering(1010) 00:14:25.951 fused_ordering(1011) 00:14:25.951 fused_ordering(1012) 00:14:25.951 fused_ordering(1013) 00:14:25.951 fused_ordering(1014) 00:14:25.951 fused_ordering(1015) 00:14:25.951 fused_ordering(1016) 00:14:25.951 fused_ordering(1017) 00:14:25.951 fused_ordering(1018) 00:14:25.951 fused_ordering(1019) 00:14:25.951 fused_ordering(1020) 00:14:25.951 fused_ordering(1021) 00:14:25.951 fused_ordering(1022) 00:14:25.951 fused_ordering(1023) 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:25.951 rmmod nvme_rdma 00:14:25.951 rmmod nvme_fabrics 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:25.951 10:56:14 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1415337 ']' 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1415337 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 1415337 ']' 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 1415337 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1415337 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1415337' 00:14:25.951 killing process with pid 1415337 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 1415337 00:14:25.951 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 1415337 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:26.208 00:14:26.208 real 0m7.237s 00:14:26.208 user 0m3.818s 00:14:26.208 sys 0m4.556s 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:26.208 ************************************ 00:14:26.208 END TEST nvmf_fused_ordering 00:14:26.208 ************************************ 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:26.208 ************************************ 00:14:26.208 START TEST nvmf_ns_masking 00:14:26.208 ************************************ 00:14:26.208 10:56:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:14:26.208 * Looking for test storage... 
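
The teardown the trace above walks through (nvmftestfini) boils down to two bash patterns: retry the module unload with fail-fast suspended, then kill the target process and wait so it is reaped. A minimal sketch of both, assuming the module names, retry count, and pid shown in the trace; unload_with_retry is an illustrative name, not the common.sh helper, and the sudo/reactor bookkeeping of the real killprocess is omitted:

#!/usr/bin/env bash
# Unload a module that may still have holders; tolerate failures briefly.
unload_with_retry() {
    set +e                                 # common.sh@124: suspend fail-fast
    for i in {1..20}; do                   # common.sh@125: bounded retries
        modprobe -v -r "$1" && break       # common.sh@126: attempt unload
        sleep 1                            # assumption: brief back-off
    done
    set -e                                 # common.sh@128: restore fail-fast
}

# Kill the nvmf target and wait on it so its exit status is collected.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # nothing to kill
    kill -0 "$pid" || return 0             # already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # matches the trace's kill/wait pair
}

unload_with_retry nvme-rdma
killprocess 1415337
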
00:14:26.208 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:26.208 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:26.208 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:26.208 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:26.467 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:26.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.468 --rc genhtml_branch_coverage=1 00:14:26.468 --rc genhtml_function_coverage=1 00:14:26.468 --rc genhtml_legend=1 00:14:26.468 --rc geninfo_all_blocks=1 00:14:26.468 --rc geninfo_unexecuted_blocks=1 00:14:26.468 00:14:26.468 ' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:26.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.468 --rc genhtml_branch_coverage=1 00:14:26.468 --rc genhtml_function_coverage=1 00:14:26.468 --rc genhtml_legend=1 00:14:26.468 --rc geninfo_all_blocks=1 00:14:26.468 --rc geninfo_unexecuted_blocks=1 00:14:26.468 00:14:26.468 ' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:26.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.468 --rc genhtml_branch_coverage=1 00:14:26.468 --rc genhtml_function_coverage=1 00:14:26.468 --rc genhtml_legend=1 00:14:26.468 --rc geninfo_all_blocks=1 00:14:26.468 --rc geninfo_unexecuted_blocks=1 00:14:26.468 00:14:26.468 ' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:26.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.468 --rc genhtml_branch_coverage=1 00:14:26.468 --rc genhtml_function_coverage=1 00:14:26.468 --rc genhtml_legend=1 00:14:26.468 --rc geninfo_all_blocks=1 00:14:26.468 --rc geninfo_unexecuted_blocks=1 00:14:26.468 00:14:26.468 ' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.468 10:56:15 
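
The lt 1.15 2 call above is scripts/common.sh comparing two version strings numerically, field by field, after splitting on '.', '-', and ':'. A condensed sketch of that comparison, assuming missing fields default to 0 (the real script also validates each field via its decimal helper, skipped here):

# Return success if version $1 stands in relation $2 ('<', '>', '==') to $3.
cmp_versions() {
    local IFS=.-:                          # split fields on '.', '-' or ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v len
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                      # every field matched
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # prints the message
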
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:26.468 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:26.468 10:56:15 
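
The "[: : integer expression expected" line above is genuine output, not corruption: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against a variable that is empty in this run, '[' refuses the empty string as an integer operand, and the branch is simply skipped. A defensive form of such a guard, with an illustrative variable name rather than the one common.sh uses:

# An empty value breaks '[ "$flag" -eq 1 ]'; default it before the test.
flag=""                                 # e.g. an optional environment toggle
if [ "${flag:-0}" -eq 1 ]; then         # ':-0' keeps the operand numeric
    echo "feature enabled"
else
    echo "feature disabled"             # taken when flag is empty, unset, or 0
fi
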
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=88f7ebef-81d7-4aa7-b696-a8699a7c1272 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5f249f22-8ad3-45ea-a457-d7c23640e469 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7c27ad5b-97cd-4f1e-a64f-e725b7737e68 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:26.468 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:26.469 10:56:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.733 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.733 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:31.733 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:31.733 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:31.733 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.734 10:56:19 
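
The discovery that follows resolves each matching PCI function to its kernel net device by globbing sysfs, which is what produces the "Found net devices under ..." lines below. A standalone sketch of that mapping, using the PCI addresses this run reports:

# Map each mlx5 PCI function to the net devices it exposes via sysfs.
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
# In this run: mlx_0_0 under 0000:af:00.0 and mlx_0_1 under 0000:af:00.1.
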
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:14:31.734 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:14:31.734 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:14:31.734 Found net devices under 0000:af:00.0: mlx_0_0 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:14:31.734 Found net devices under 0000:af:00.1: mlx_0_1 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:31.734 10:56:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:31.734 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:31.734 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:31.734 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:31.734 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:31.734 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:31.734 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:31.734 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:31.734 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.735 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:14:31.735 altname enp175s0f0np0 00:14:31.735 altname ens801f0np0 00:14:31.735 inet 192.168.100.8/24 scope global mlx_0_0 00:14:31.735 valid_lft forever preferred_lft forever 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:31.735 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.735 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:14:31.735 altname enp175s0f1np1 00:14:31.735 altname ens801f1np1 00:14:31.735 inet 192.168.100.9/24 scope global mlx_0_1 00:14:31.735 valid_lft forever preferred_lft forever 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == 
iso ']' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:31.735 192.168.100.9' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:31.735 192.168.100.9' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:31.735 192.168.100.9' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1418490 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1418490 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1418490 ']' 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
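
Two small pipelines above do the address bookkeeping: get_ip_address strips the prefix length off "ip -o -4 addr show" output, and the resulting newline-separated list is sliced with head/tail into the first and second target IPs. A sketch of both, with the interface names and addresses this run discovered:

# IPv4 of an interface: field 4 of 'ip -o -4' is "ADDR/PREFIX"; drop "/PREFIX".
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"                 # one address per line

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

[ -z "$NVMF_FIRST_TARGET_IP" ] && { echo "no RDMA IPs found" >&2; exit 1; }
echo "targets: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
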
00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.735 [2024-11-15 10:56:20.176196] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:14:31.735 [2024-11-15 10:56:20.176247] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.735 [2024-11-15 10:56:20.239491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.735 [2024-11-15 10:56:20.280676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.735 [2024-11-15 10:56:20.280710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.735 [2024-11-15 10:56:20.280718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.735 [2024-11-15 10:56:20.280725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.735 [2024-11-15 10:56:20.280732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.735 [2024-11-15 10:56:20.281336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.735 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:31.735 [2024-11-15 10:56:20.607195] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf3afd0/0xf3f4c0) succeed. 00:14:31.735 [2024-11-15 10:56:20.616290] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf3c480/0xf80b60) succeed. 
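
Everything from nvmfappstart above through the subsystem setup below reduces to: launch nvmf_tgt, wait for its RPC socket, create the RDMA transport, then build the subsystem the masking test will probe. A condensed replay of those steps with the flags and names from the trace; readiness is polled here with rpc_get_methods as an approximation, since the real waitforlisten watches the socket directly:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$SPDK/scripts/rpc.py

$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &     # flags from the trace
nvmfpid=$!

# Block until the app answers on its UNIX-domain RPC socket.
for ((i = 0; i < 60; i++)); do
    $rpc rpc_get_methods > /dev/null 2>&1 && break
    sleep 1
done

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MB bdev, 512 B blocks
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
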
00:14:31.994 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:31.994 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:31.994 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:31.994 Malloc1 00:14:32.252 10:56:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:32.252 Malloc2 00:14:32.252 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:32.510 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:32.769 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:32.769 [2024-11-15 10:56:21.600808] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:32.769 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:32.769 10:56:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7c27ad5b-97cd-4f1e-a64f-e725b7737e68 -a 192.168.100.8 -s 4420 -i 4 00:14:33.702 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:33.702 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:33.702 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:33.702 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:33.702 10:56:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:36.238 [ 0]:0x1 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b6e381c3cb1475eb355c106aa58d481 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b6e381c3cb1475eb355c106aa58d481 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:36.238 [ 0]:0x1 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b6e381c3cb1475eb355c106aa58d481 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b6e381c3cb1475eb355c106aa58d481 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:36.238 [ 1]:0x2 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfab2ef69be46739b1a5c88f4b4edf1 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfab2ef69be46739b1a5c88f4b4edf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:36.238 10:56:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:14:36.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.806 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.064 10:56:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:37.323 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:37.323 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7c27ad5b-97cd-4f1e-a64f-e725b7737e68 -a 192.168.100.8 -s 4420 -i 4 00:14:38.257 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:38.257 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:38.257 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.257 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:38.257 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:38.257 10:56:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:40.159 10:56:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.159 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.417 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:40.418 [ 0]:0x2 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfab2ef69be46739b1a5c88f4b4edf1 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfab2ef69be46739b1a5c88f4b4edf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.418 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:40.676 [ 0]:0x1 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.676 10:56:29 
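
The visibility assertions in this stretch rest on two helpers. ns_is_visible treats a namespace as visible when "nvme list-ns" lists its NSID and "nvme id-ns" reports a non-zero NGUID; NOT inverts a command's status so a check that must fail can still pass the test. A sketch of both, assuming the device name and NSIDs from the trace, and simplifying the es bookkeeping of the real NOT:

# Visible = NSID listed and NGUID not the all-zero placeholder.
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# Succeed only if the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))                      # non-zero status from "$@" is success
}

ns_is_visible 0x2                 # namespace 2 stays visible to this host
NOT ns_is_visible 0x1             # namespace 1 is masked after ns_remove_host
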
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b6e381c3cb1475eb355c106aa58d481 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b6e381c3cb1475eb355c106aa58d481 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:40.676 [ 1]:0x2 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfab2ef69be46739b1a5c88f4b4edf1 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfab2ef69be46739b1a5c88f4b4edf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.676 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:40.935 [ 0]:0x2 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfab2ef69be46739b1a5c88f4b4edf1 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfab2ef69be46739b1a5c88f4b4edf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:40.935 10:56:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.502 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:41.759 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:41.759 10:56:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7c27ad5b-97cd-4f1e-a64f-e725b7737e68 -a 192.168.100.8 -s 4420 -i 4 00:14:42.690 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:42.690 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:42.690 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.691 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:42.691 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:42.691 10:56:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.214 10:56:33 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.214 [ 0]:0x1 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b6e381c3cb1475eb355c106aa58d481 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b6e381c3cb1475eb355c106aa58d481 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.214 [ 1]:0x2 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.214 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfab2ef69be46739b1a5c88f4b4edf1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfab2ef69be46739b1a5c88f4b4edf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:45.215 10:56:33 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.215 [ 0]:0x2 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfab2ef69be46739b1a5c88f4b4edf1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfab2ef69be46739b1a5c88f4b4edf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:45.215 10:56:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:45.472 [2024-11-15 10:56:34.116755] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:45.472 request: 00:14:45.472 { 00:14:45.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.472 "nsid": 2, 00:14:45.472 "host": "nqn.2016-06.io.spdk:host1", 00:14:45.472 "method": "nvmf_ns_remove_host", 00:14:45.472 "req_id": 1 00:14:45.472 } 00:14:45.472 Got JSON-RPC error response 00:14:45.472 response: 00:14:45.472 { 00:14:45.472 "code": -32602, 00:14:45.472 "message": "Invalid parameters" 00:14:45.472 } 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.472 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.473 [ 0]:0x2 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ecfab2ef69be46739b1a5c88f4b4edf1 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ecfab2ef69be46739b1a5c88f4b4edf1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:45.473 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.036 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1421196 00:14:46.036 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:46.036 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.036 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1421196 /var/tmp/host.sock 00:14:46.036 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1421196 ']' 00:14:46.037 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:46.296 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.296 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:46.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
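For readers following the xtrace above: the ns_is_visible helper the suite keeps invoking boils down to the pattern below. This is a minimal standalone sketch reconstructed from the traced commands (nvme list-ns, nvme id-ns ... -o json | jq -r .nguid); the function body, device path, and NSID literal are illustrative, not a copy of ns_masking.sh.

# Sketch: a namespace counts as visible when list-ns reports its NSID and
# id-ns returns a non-zero NGUID; in the trace, masked namespaces come
# back with an all-zero NGUID (32 hex zeroes).
ns_visible() {
    local dev=$1 nsid=$2
    nvme list-ns "$dev" | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns "$dev" -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
# Example: ns_visible /dev/nvme0 0x1 && echo "nsid 0x1 is visible"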
00:14:46.296 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.296 10:56:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:46.296 [2024-11-15 10:56:34.968744] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:14:46.296 [2024-11-15 10:56:34.968793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421196 ] 00:14:46.296 [2024-11-15 10:56:35.031928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.296 [2024-11-15 10:56:35.072605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.554 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:46.554 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:46.554 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.813 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:46.813 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 88f7ebef-81d7-4aa7-b696-a8699a7c1272 00:14:46.813 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:46.813 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 88F7EBEF81D74AA7B696A8699A7C1272 -i 00:14:47.071 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5f249f22-8ad3-45ea-a457-d7c23640e469 00:14:47.071 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:47.071 10:56:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5F249F228AD345EAA457D7C23640E469 -i 00:14:47.329 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:47.587 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:47.587 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:47.587 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:14:47.845 nvme0n1 00:14:47.845 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:47.845 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:48.103 nvme1n2 00:14:48.103 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:48.103 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:48.103 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:48.103 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:48.103 10:56:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:48.360 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:48.360 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:48.360 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:48.360 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:48.618 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 88f7ebef-81d7-4aa7-b696-a8699a7c1272 == \8\8\f\7\e\b\e\f\-\8\1\d\7\-\4\a\a\7\-\b\6\9\6\-\a\8\6\9\9\a\7\c\1\2\7\2 ]] 00:14:48.618 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:48.618 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:48.618 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:48.877 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5f249f22-8ad3-45ea-a457-d7c23640e469 == \5\f\2\4\9\f\2\2\-\8\a\d\3\-\4\5\e\a\-\a\4\5\7\-\d\7\c\2\3\6\4\0\e\4\6\9 ]] 00:14:48.877 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.877 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 88f7ebef-81d7-4aa7-b696-a8699a7c1272 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 88F7EBEF81D74AA7B696A8699A7C1272 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 88F7EBEF81D74AA7B696A8699A7C1272 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:49.135 10:56:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 88F7EBEF81D74AA7B696A8699A7C1272 00:14:49.392 [2024-11-15 10:56:38.094728] bdev.c:8619:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:49.392 [2024-11-15 10:56:38.094761] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:49.392 [2024-11-15 10:56:38.094769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.392 request: 00:14:49.392 { 00:14:49.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.392 "namespace": { 00:14:49.392 "bdev_name": "invalid", 00:14:49.392 "nsid": 1, 00:14:49.392 "nguid": "88F7EBEF81D74AA7B696A8699A7C1272", 00:14:49.392 "no_auto_visible": false, 00:14:49.392 "no_metadata": false 00:14:49.392 }, 00:14:49.392 "method": "nvmf_subsystem_add_ns", 00:14:49.392 "req_id": 1 00:14:49.392 } 00:14:49.392 Got JSON-RPC error response 00:14:49.392 response: 00:14:49.392 { 00:14:49.392 "code": -32602, 00:14:49.392 "message": "Invalid parameters" 00:14:49.392 } 00:14:49.392 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:49.392 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:49.392 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:49.392 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:49.392 
10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 88f7ebef-81d7-4aa7-b696-a8699a7c1272 00:14:49.392 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:49.392 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 88F7EBEF81D74AA7B696A8699A7C1272 -i 00:14:49.650 10:56:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:51.546 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:51.546 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:51.546 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1421196 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1421196 ']' 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1421196 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1421196 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1421196' 00:14:51.804 killing process with pid 1421196 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1421196 00:14:51.804 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1421196 00:14:52.062 10:56:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:52.320 
10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:52.320 rmmod nvme_rdma 00:14:52.320 rmmod nvme_fabrics 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1418490 ']' 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1418490 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1418490 ']' 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1418490 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1418490 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1418490' 00:14:52.320 killing process with pid 1418490 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1418490 00:14:52.320 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1418490 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:52.579 00:14:52.579 real 0m26.440s 00:14:52.579 user 0m34.101s 00:14:52.579 sys 0m5.832s 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.579 ************************************ 00:14:52.579 END TEST nvmf_ns_masking 00:14:52.579 ************************************ 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.579 ************************************ 00:14:52.579 START TEST nvmf_nvme_cli 00:14:52.579 ************************************ 
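Before the trace moves on to nvmf_nvme_cli, the masking flow exercised above condenses to the RPC sequence below. This is a sketch assembled from the rpc.py calls visible in the trace (same NQNs, bdev, and NGUID); the real ns_masking.sh interleaves these with connect/disconnect and visibility checks, which are omitted here.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Create a namespace that no host can see by default ...
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

# ... then grant and revoke visibility per host NQN.
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# A namespace can also be added with an explicit NGUID, i.e. the UUID with
# the dashes stripped as uuid2nguid does above (-i appears to be the short
# form of the --no-auto-visible flag used earlier):
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
    -g 88F7EBEF81D74AA7B696A8699A7C1272 -i

As the two JSON-RPC error responses in the trace show, the same RPCs fail with code -32602 "Invalid parameters" when aimed at a namespace ID that cannot be modified or at a bdev name that cannot be opened (error=-19 for the nonexistent bdev "invalid").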
00:14:52.579 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:52.840 * Looking for test storage... 00:14:52.840 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:52.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.840 --rc genhtml_branch_coverage=1 00:14:52.840 --rc genhtml_function_coverage=1 00:14:52.840 --rc genhtml_legend=1 00:14:52.840 --rc geninfo_all_blocks=1 00:14:52.840 --rc geninfo_unexecuted_blocks=1 00:14:52.840 00:14:52.840 ' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:52.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.840 --rc genhtml_branch_coverage=1 00:14:52.840 --rc genhtml_function_coverage=1 00:14:52.840 --rc genhtml_legend=1 00:14:52.840 --rc geninfo_all_blocks=1 00:14:52.840 --rc geninfo_unexecuted_blocks=1 00:14:52.840 00:14:52.840 ' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:52.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.840 --rc genhtml_branch_coverage=1 00:14:52.840 --rc genhtml_function_coverage=1 00:14:52.840 --rc genhtml_legend=1 00:14:52.840 --rc geninfo_all_blocks=1 00:14:52.840 --rc geninfo_unexecuted_blocks=1 00:14:52.840 00:14:52.840 ' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:52.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.840 --rc genhtml_branch_coverage=1 00:14:52.840 --rc genhtml_function_coverage=1 00:14:52.840 --rc genhtml_legend=1 00:14:52.840 --rc geninfo_all_blocks=1 00:14:52.840 --rc geninfo_unexecuted_blocks=1 00:14:52.840 00:14:52.840 ' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.840 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.841 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.841 10:56:41 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:52.841 10:56:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:14:58.105 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:14:58.105 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:14:58.105 Found net devices under 0000:af:00.0: mlx_0_0 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:14:58.105 Found net devices under 0000:af:00.1: mlx_0_1 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:58.105 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:58.363 10:56:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:58.363 10:56:46 
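For each PCI function that survives the filter, the script globs /sys/bus/pci/devices/$pci/net/ to find the kernel netdev bound to it, which is how 0000:af:00.0 and 0000:af:00.1 resolve to mlx_0_0 and mlx_0_1 above. The same sysfs lookup in isolation (the address is this rig's, taken from the trace; substitute your own):

    # Resolve a PCI function to its netdev name(s) via sysfs.
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Without nullglob a miss leaves the literal pattern, so check existence:
    if [[ -e ${pci_net_devs[0]} ]]; then
      echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    fi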
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ 
-z 192.168.100.8 ]] 00:14:58.363 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:58.363 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:58.364 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:14:58.364 altname enp175s0f0np0 00:14:58.364 altname ens801f0np0 00:14:58.364 inet 192.168.100.8/24 scope global mlx_0_0 00:14:58.364 valid_lft forever preferred_lft forever 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:58.364 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:58.364 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:14:58.364 altname enp175s0f1np1 00:14:58.364 altname ens801f1np1 00:14:58.364 inet 192.168.100.9/24 scope global mlx_0_1 00:14:58.364 valid_lft forever preferred_lft forever 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 
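At this point allocate_nic_ips has confirmed 192.168.100.8/24 on mlx_0_0 and 192.168.100.9/24 on mlx_0_1. The get_ip_address helper it leans on is just the ip/awk/cut pipeline visible in the trace; a faithful sketch:

    # First IPv4 address on an interface, without the /prefix (cf. common.sh@116-117).
    get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip_addr=$(get_ip_address mlx_0_0)   # -> 192.168.100.8 on this test rig
    [[ -z $ip_addr ]] && echo "no IPv4 address on mlx_0_0" >&2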
00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:58.364 192.168.100.9' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:58.364 192.168.100.9' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:58.364 192.168.100.9' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:58.364 
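The gathered address list is then folded into the two test endpoints with nothing more than head and tail, exactly as the common.sh@485-486 lines show. With the two addresses from above:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9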
10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1425446 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1425446 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 1425446 ']' 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:58.364 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.364 [2024-11-15 10:56:47.210682] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:14:58.364 [2024-11-15 10:56:47.210726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.621 [2024-11-15 10:56:47.274195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.621 [2024-11-15 10:56:47.318307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.622 [2024-11-15 10:56:47.318341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.622 [2024-11-15 10:56:47.318349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.622 [2024-11-15 10:56:47.318355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.622 [2024-11-15 10:56:47.318360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
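nvmfappstart boils down to launching nvmf_tgt in the background, recording its pid, and parking in waitforlisten until the RPC socket answers. A reduced sketch of that start-and-wait pattern — the polling here is a plain socket-existence check, whereas SPDK's waitforlisten actually retries an RPC against /var/tmp/spdk.sock:

    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
    "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for _ in {1..100}; do
      [[ -S /var/tmp/spdk.sock ]] && break   # crude check; the real helper issues an RPC
      sleep 0.1
    done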
00:14:58.622 [2024-11-15 10:56:47.320020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.622 [2024-11-15 10:56:47.320117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.622 [2024-11-15 10:56:47.320209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.622 [2024-11-15 10:56:47.320211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.622 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.622 [2024-11-15 10:56:47.479045] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x238c230/0x2390720) succeed. 00:14:58.622 [2024-11-15 10:56:47.488290] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x238d8c0/0x23d1dc0) succeed. 
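With all four reactors up, the first RPC the test issues is nvmf_create_transport; the two create_ib_device notices that follow confirm the RDMA transport bound both mlx5 ports. The call as issued through rpc.py (1024 shared receive buffers, 8192-byte in-capsule data size, per nvme_cli.sh@19):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192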
00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.914 Malloc0 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.914 Malloc1 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.914 [2024-11-15 10:56:47.707073] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:58.914 10:56:47 
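The target side is provisioned with a short RPC sequence, all visible in the trace: two 64 MiB malloc RAM disks with 512-byte blocks, a subsystem with serial SPDKISFASTANDAWESOME, both namespaces, and an RDMA listener on the first target IP. Batched up:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t rdma -a 192.168.100.8 -s 4420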
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.914 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:14:59.198 00:14:59.198 Discovery Log Number of Records 2, Generation counter 2 00:14:59.198 =====Discovery Log Entry 0====== 00:14:59.198 trtype: rdma 00:14:59.198 adrfam: ipv4 00:14:59.198 subtype: current discovery subsystem 00:14:59.198 treq: not required 00:14:59.198 portid: 0 00:14:59.198 trsvcid: 4420 00:14:59.198 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:59.198 traddr: 192.168.100.8 00:14:59.198 eflags: explicit discovery connections, duplicate discovery information 00:14:59.198 rdma_prtype: not specified 00:14:59.198 rdma_qptype: connected 00:14:59.198 rdma_cms: rdma-cm 00:14:59.198 rdma_pkey: 0x0000 00:14:59.198 =====Discovery Log Entry 1====== 00:14:59.198 trtype: rdma 00:14:59.198 adrfam: ipv4 00:14:59.198 subtype: nvme subsystem 00:14:59.198 treq: not required 00:14:59.198 portid: 0 00:14:59.198 trsvcid: 4420 00:14:59.198 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:59.198 traddr: 192.168.100.8 00:14:59.198 eflags: none 00:14:59.198 rdma_prtype: not specified 00:14:59.198 rdma_qptype: connected 00:14:59.198 rdma_cms: rdma-cm 00:14:59.198 rdma_pkey: 0x0000 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:59.198 10:56:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:02.482 10:56:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:02.482 10:56:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:15:02.482 10:56:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.482 10:56:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
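The discovery log above confirms both the discovery subsystem and cnode1 are reachable at 192.168.100.8:4420 over RDMA; the test then connects with the extra -i 15 argument that common.sh@388 selected earlier for this NIC model. Host-side, the pair of nvme-cli commands is:

    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
         --hostid=80bdebd3-4c74-ea11-906e-0017a4403562
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
         --hostid=80bdebd3-4c74-ea11-906e-0017a4403562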
common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:15:02.482 10:56:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:15:02.482 10:56:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:04.383 /dev/nvme0n2 ]] 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
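waitforserial polls lsblk until the expected number of block devices carrying the subsystem's serial shows up — here both Malloc-backed namespaces surface as /dev/nvme0n1 and /dev/nvme0n2 after one two-second sleep. A sketch of the wait loop as traced at common/autotest_common.sh@1207-1210:

    # Wait until $2 block devices carrying serial $1 appear.
    waitforserial() {
      local serial=$1 want=${2:-1} i=0 found=0
      while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == want )) && return 0
        sleep 2
      done
      return 1
    }
    waitforserial SPDKISFASTANDAWESOME 2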
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:04.383 10:56:53 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:06.912 
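Teardown mirrors setup: disconnect the host, wait for the serial to drop out of lsblk, delete the subsystem over RPC, then nvmftestfini unloads the kernel modules under set +e with a bounded retry loop (the {1..20} modprobe -r iterations starting here). The host-side pair, with the wait written as the inverse of waitforserial:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Poll until the serial is gone, as waitforserial_disconnect does:
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1
    done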
10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:06.912 rmmod nvme_rdma 00:15:06.912 rmmod nvme_fabrics 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1425446 ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1425446 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 1425446 ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 1425446 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1425446 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1425446' 00:15:06.912 killing process with pid 1425446 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 1425446 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 1425446 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:06.912 00:15:06.912 real 0m14.292s 00:15:06.912 user 0m35.205s 00:15:06.912 sys 0m4.724s 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:06.912 ************************************ 00:15:06.912 END TEST nvmf_nvme_cli 00:15:06.912 ************************************ 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:06.912 10:56:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.172 ************************************ 00:15:07.172 START TEST nvmf_auth_target 00:15:07.172 ************************************ 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:07.172 * Looking for test storage... 00:15:07.172 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.172 --rc genhtml_branch_coverage=1 00:15:07.172 --rc genhtml_function_coverage=1 00:15:07.172 --rc genhtml_legend=1 00:15:07.172 --rc geninfo_all_blocks=1 00:15:07.172 --rc geninfo_unexecuted_blocks=1 00:15:07.172 00:15:07.172 ' 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.172 --rc genhtml_branch_coverage=1 00:15:07.172 --rc genhtml_function_coverage=1 00:15:07.172 --rc genhtml_legend=1 00:15:07.172 --rc geninfo_all_blocks=1 00:15:07.172 --rc geninfo_unexecuted_blocks=1 00:15:07.172 00:15:07.172 ' 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.172 --rc genhtml_branch_coverage=1 00:15:07.172 --rc genhtml_function_coverage=1 00:15:07.172 --rc genhtml_legend=1 00:15:07.172 --rc geninfo_all_blocks=1 00:15:07.172 --rc geninfo_unexecuted_blocks=1 00:15:07.172 00:15:07.172 ' 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.172 --rc genhtml_branch_coverage=1 00:15:07.172 --rc genhtml_function_coverage=1 00:15:07.172 --rc genhtml_legend=1 00:15:07.172 --rc geninfo_all_blocks=1 00:15:07.172 --rc geninfo_unexecuted_blocks=1 00:15:07.172 00:15:07.172 ' 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.172 10:56:55 
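The lt/cmp_versions dance above decides whether the installed lcov (1.15 on this rig) predates 2.x, which in turn selects the extra --rc lcov_branch_coverage flags exported just below. The comparison splits both versions on the characters .-: and walks them component by component; a compact sketch of the same logic:

    # Succeed if dotted version $1 sorts before $2 (cf. scripts/common.sh lt()).
    version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1   # equal
    }
    version_lt 1.15 2 && echo "old lcov"   # prints "old lcov"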
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.172 10:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.172 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:07.172 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
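One genuine shell bug is captured above: common.sh line 33 evaluates '[' '' -eq 1 ']' with an unset value, and test(1) rejects the empty string as an integer ([: : integer expression expected); since the line is not under set -e, the run only logs the error and continues. The usual guard is to default the variable or test for non-empty first — a sketch, with the variable name purely illustrative since the trace shows only the empty value:

    # Failing form, as traced:  [ "$SOME_FLAG" -eq 1 ]  with SOME_FLAG unset.
    var=""
    if [ "${var:-0}" -eq 1 ]; then echo yes; fi          # default empty to 0
    if [[ -n $var && $var -eq 1 ]]; then echo yes; fi    # or guard on non-empty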
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:07.173 10:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:12.441 10:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:15:12.441 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:12.441 10:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:15:12.441 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:15:12.441 Found net devices under 0000:af:00.0: mlx_0_0 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:15:12.441 Found net devices under 0000:af:00.1: mlx_0_1 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:12.441 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 
-- # [[ rdma == rdma ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 
-- # echo mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:12.701 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:12.701 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:15:12.701 altname enp175s0f0np0 00:15:12.701 altname ens801f0np0 00:15:12.701 inet 192.168.100.8/24 scope global mlx_0_0 00:15:12.701 valid_lft forever preferred_lft forever 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:12.701 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:12.701 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:15:12.701 altname enp175s0f1np1 00:15:12.701 altname ens801f1np1 00:15:12.701 inet 192.168.100.9/24 scope global mlx_0_1 00:15:12.701 valid_lft forever preferred_lft forever 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:12.701 10:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:15:12.701 192.168.100.9' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:12.701 192.168.100.9' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:12.701 192.168.100.9' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:12.701 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1430152 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1430152 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1430152 ']' 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
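The address discovery traced above reduces to a three-stage pipeline per interface: ip -o -4 addr show prints one line per address, awk keeps the fourth (CIDR) field, and cut strips the prefix length; head and tail then split the resulting list into the first and second target IPs. A minimal standalone sketch of that logic, with the helper name and the mlx_0_* interface names taken from the trace:

#!/usr/bin/env bash
# Print the first IPv4 address of an interface, prefix length stripped,
# mirroring the get_ip_address helper in nvmf/common.sh as traced above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Split the per-interface results the same way the trace does.
RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")    # 192.168.100.8 on this rig
NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST")  # 192.168.100.9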
00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:12.702 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1430186 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eb6ad08fa8a7c356f110dfbc1e9acbc5a35ed8bebe3931e3 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fzD 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eb6ad08fa8a7c356f110dfbc1e9acbc5a35ed8bebe3931e3 0 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eb6ad08fa8a7c356f110dfbc1e9acbc5a35ed8bebe3931e3 0 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eb6ad08fa8a7c356f110dfbc1e9acbc5a35ed8bebe3931e3 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:12.960 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:15:13.219 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fzD 00:15:13.219 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fzD 00:15:13.219 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.fzD 00:15:13.219 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:13.219 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:13.219 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a06499d58e93467b9d03f0d0fdad4098605606feeda8b366d3144d3a2d857a15 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Uj3 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a06499d58e93467b9d03f0d0fdad4098605606feeda8b366d3144d3a2d857a15 3 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a06499d58e93467b9d03f0d0fdad4098605606feeda8b366d3144d3a2d857a15 3 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a06499d58e93467b9d03f0d0fdad4098605606feeda8b366d3144d3a2d857a15 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Uj3 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Uj3 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Uj3 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:13.220 10:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bc3a75619a06ec17718fa210ab1544aa 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2Oo 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bc3a75619a06ec17718fa210ab1544aa 1 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bc3a75619a06ec17718fa210ab1544aa 1 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bc3a75619a06ec17718fa210ab1544aa 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2Oo 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2Oo 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.2Oo 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9069e5c2b1c26733b69271ad01a492e537028fa3b8bddc4f 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Uza 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9069e5c2b1c26733b69271ad01a492e537028fa3b8bddc4f 2 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9069e5c2b1c26733b69271ad01a492e537028fa3b8bddc4f 2 00:15:13.220 10:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9069e5c2b1c26733b69271ad01a492e537028fa3b8bddc4f 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:13.220 10:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Uza 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Uza 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Uza 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dbcae2bec7498e7351b0bc8dff3a3aa155513f3ca46b746d 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yyp 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dbcae2bec7498e7351b0bc8dff3a3aa155513f3ca46b746d 2 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dbcae2bec7498e7351b0bc8dff3a3aa155513f3ca46b746d 2 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dbcae2bec7498e7351b0bc8dff3a3aa155513f3ca46b746d 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yyp 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yyp 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.yyp 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:13.220 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5609c2e22b5207d60f8575b09fe74554 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Cf4 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5609c2e22b5207d60f8575b09fe74554 1 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5609c2e22b5207d60f8575b09fe74554 1 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5609c2e22b5207d60f8575b09fe74554 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Cf4 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Cf4 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Cf4 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=63e4c5e7d31380b996f3e11c8af8b775b2ee4cc0fd19ef12c56bf9316165f2cb 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:13.479 10:57:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kEk 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 63e4c5e7d31380b996f3e11c8af8b775b2ee4cc0fd19ef12c56bf9316165f2cb 3 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 63e4c5e7d31380b996f3e11c8af8b775b2ee4cc0fd19ef12c56bf9316165f2cb 3 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=63e4c5e7d31380b996f3e11c8af8b775b2ee4cc0fd19ef12c56bf9316165f2cb 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kEk 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kEk 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.kEk 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1430152 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1430152 ']' 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.479 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1430186 /var/tmp/host.sock 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1430186 ']' 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
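Each gen_dhchap_key call above follows the same recipe: pick a digest index (null=0, sha256=1, sha384=2, sha512=3), draw len/2 random bytes from /dev/urandom as a hex string with xxd, wrap that string in the DHHC-1 secret representation, and store it mode 0600 in a mktemp file. The body of the inline "python -" step is not captured by the trace; the sketch below assumes the standard NVMe-oF DH-HMAC-CHAP secret encoding (base64 over the ASCII key followed by its little-endian CRC-32), which is consistent with the DHHC-1:00:ZWI2YWQw...: secret appearing later in this log, whose base64 payload decodes back to the key eb6ad08f... generated here:

# Sketch of gen_dhchap_key, restructured to pass the key via the environment
# rather than an inline heredoc; the encoding inside the Python step is an
# assumption as noted above, not copied from the trace.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l "$((len / 2))" /dev/urandom)   # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    KEY=$key DIGEST=${digests[$digest]} python3 -c '
import base64, os, zlib
key = os.environ["KEY"].encode()
idx = int(os.environ["DIGEST"])
crc = zlib.crc32(key).to_bytes(4, "little")   # little-endian CRC-32 tail (assumed)
print("DHHC-1:%02d:%s:" % (idx, base64.b64encode(key + crc).decode()), end="")
' > "$file"
    chmod 0600 "$file"
    echo "$file"
}

The same formatted secret is what the host later hands to nvme connect as --dhchap-secret and --dhchap-ctrl-secret, so the key files written here and the CLI strings seen further down carry identical material.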
00:15:13.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.737 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:13.738 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:13.738 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.738 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fzD 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fzD 00:15:13.995 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fzD 00:15:14.254 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Uj3 ]] 00:15:14.254 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Uj3 00:15:14.254 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.254 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.254 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.254 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Uj3 00:15:14.254 10:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Uj3 00:15:14.254 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:14.254 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2Oo 00:15:14.254 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.254 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.254 10:57:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.254 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2Oo 00:15:14.254 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2Oo 00:15:14.511 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Uza ]] 00:15:14.511 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Uza 00:15:14.511 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.511 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.511 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.511 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Uza 00:15:14.511 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Uza 00:15:14.769 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:14.769 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yyp 00:15:14.769 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.770 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.770 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.770 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yyp 00:15:14.770 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yyp 00:15:15.027 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Cf4 ]] 00:15:15.027 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cf4 00:15:15.027 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.027 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.027 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.027 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cf4 00:15:15.027 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cf4 00:15:15.286 10:57:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:15.286 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kEk 00:15:15.286 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.286 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.286 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.286 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kEk 00:15:15.286 10:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kEk 00:15:15.286 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:15.286 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:15.286 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.286 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.286 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:15.286 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.544 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.803 00:15:15.803 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.803 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.803 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.061 { 00:15:16.061 "cntlid": 1, 00:15:16.061 "qid": 0, 00:15:16.061 "state": "enabled", 00:15:16.061 "thread": "nvmf_tgt_poll_group_000", 00:15:16.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:16.061 "listen_address": { 00:15:16.061 "trtype": "RDMA", 00:15:16.061 "adrfam": "IPv4", 00:15:16.061 "traddr": "192.168.100.8", 00:15:16.061 "trsvcid": "4420" 00:15:16.061 }, 00:15:16.061 "peer_address": { 00:15:16.061 "trtype": "RDMA", 00:15:16.061 "adrfam": "IPv4", 00:15:16.061 "traddr": "192.168.100.8", 00:15:16.061 "trsvcid": "40112" 00:15:16.061 }, 00:15:16.061 "auth": { 00:15:16.061 "state": "completed", 00:15:16.061 "digest": "sha256", 00:15:16.061 "dhgroup": "null" 00:15:16.061 } 00:15:16.061 } 00:15:16.061 ]' 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.061 10:57:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:16.319 10:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:16.319 10:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:17.254 10:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.513 10:57:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.513 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.772 00:15:17.772 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.772 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.772 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.031 { 00:15:18.031 "cntlid": 3, 00:15:18.031 "qid": 0, 00:15:18.031 "state": "enabled", 00:15:18.031 "thread": "nvmf_tgt_poll_group_000", 00:15:18.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:18.031 "listen_address": { 00:15:18.031 "trtype": "RDMA", 00:15:18.031 "adrfam": "IPv4", 00:15:18.031 "traddr": "192.168.100.8", 00:15:18.031 "trsvcid": "4420" 00:15:18.031 }, 00:15:18.031 "peer_address": { 00:15:18.031 "trtype": "RDMA", 00:15:18.031 "adrfam": "IPv4", 00:15:18.031 "traddr": "192.168.100.8", 00:15:18.031 "trsvcid": "39856" 00:15:18.031 }, 00:15:18.031 "auth": { 00:15:18.031 "state": "completed", 00:15:18.031 "digest": "sha256", 00:15:18.031 "dhgroup": "null" 00:15:18.031 } 00:15:18.031 } 00:15:18.031 ]' 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.031 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.289 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:18.289 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.289 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.289 10:57:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.289 10:57:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.548 10:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:18.548 10:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:19.115 10:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.373 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:19.373 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.373 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.373 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.373 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.373 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.373 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.632 10:57:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.632 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.890 00:15:19.890 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.890 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.890 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.148 { 00:15:20.148 "cntlid": 5, 00:15:20.148 "qid": 0, 00:15:20.148 "state": "enabled", 00:15:20.148 "thread": "nvmf_tgt_poll_group_000", 00:15:20.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:20.148 "listen_address": { 00:15:20.148 "trtype": "RDMA", 00:15:20.148 "adrfam": "IPv4", 00:15:20.148 "traddr": "192.168.100.8", 00:15:20.148 "trsvcid": "4420" 00:15:20.148 }, 00:15:20.148 "peer_address": { 00:15:20.148 "trtype": "RDMA", 00:15:20.148 "adrfam": "IPv4", 00:15:20.148 "traddr": "192.168.100.8", 00:15:20.148 "trsvcid": "40844" 00:15:20.148 }, 00:15:20.148 "auth": { 00:15:20.148 "state": "completed", 00:15:20.148 "digest": "sha256", 00:15:20.148 "dhgroup": "null" 00:15:20.148 } 00:15:20.148 } 00:15:20.148 ]' 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:20.148 10:57:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.148 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.149 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.149 10:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.407 10:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:20.407 10:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:21.342 10:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.342 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:21.342 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.342 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.342 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.342 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.342 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.342 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:21.600 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.601 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.859 00:15:21.859 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.859 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.859 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.119 { 00:15:22.119 "cntlid": 7, 00:15:22.119 "qid": 0, 00:15:22.119 "state": "enabled", 00:15:22.119 "thread": "nvmf_tgt_poll_group_000", 00:15:22.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:22.119 "listen_address": { 00:15:22.119 "trtype": "RDMA", 00:15:22.119 "adrfam": "IPv4", 00:15:22.119 "traddr": "192.168.100.8", 00:15:22.119 "trsvcid": "4420" 00:15:22.119 }, 00:15:22.119 "peer_address": { 00:15:22.119 "trtype": "RDMA", 00:15:22.119 "adrfam": "IPv4", 00:15:22.119 "traddr": "192.168.100.8", 00:15:22.119 "trsvcid": "56149" 00:15:22.119 }, 00:15:22.119 "auth": { 00:15:22.119 "state": "completed", 00:15:22.119 "digest": "sha256", 00:15:22.119 "dhgroup": "null" 00:15:22.119 } 00:15:22.119 } 00:15:22.119 ]' 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
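The key3 pass above differs from the earlier ones in a single detail: nvmf_subsystem_add_host is invoked with --dhchap-key key3 but no --dhchap-ctrlr-key. The expansion at target/auth.sh@68 builds the controller-key argument only when a matching ckey exists, so for key3 (which has no ckey3 in this run) the flag is dropped and authentication is unidirectional: the host proves itself to the target but does not challenge the controller back. A minimal sketch of that expansion, using the names visible in the trace ($3 is connect_authenticate's key-id argument and hostnqn stands for the host NQN printed in the records above):

    # ckeys[3] is empty in this run, so the array expands to nothing and
    # the --dhchap-ctrlr-key flag is omitted for key3 (unidirectional auth).
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"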
00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.119 10:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.378 10:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:22.378 10:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:23.313 10:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.572 10:57:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.572 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.830 00:15:23.830 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.830 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.830 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.088 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.088 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.088 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.088 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.088 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.088 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.088 { 00:15:24.088 "cntlid": 9, 00:15:24.088 "qid": 0, 00:15:24.088 "state": "enabled", 00:15:24.088 "thread": "nvmf_tgt_poll_group_000", 00:15:24.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:24.088 "listen_address": { 00:15:24.088 "trtype": "RDMA", 00:15:24.088 "adrfam": "IPv4", 00:15:24.088 "traddr": "192.168.100.8", 00:15:24.088 "trsvcid": "4420" 00:15:24.088 }, 00:15:24.088 "peer_address": { 00:15:24.088 "trtype": "RDMA", 00:15:24.088 "adrfam": "IPv4", 00:15:24.088 "traddr": "192.168.100.8", 00:15:24.088 "trsvcid": "59214" 00:15:24.089 }, 00:15:24.089 "auth": { 00:15:24.089 "state": "completed", 00:15:24.089 "digest": "sha256", 00:15:24.089 "dhgroup": "ffdhe2048" 00:15:24.089 } 00:15:24.089 } 00:15:24.089 ]' 00:15:24.089 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
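Every pass ends with the same three assertions against the nvmf_subsystem_get_qpairs output (target/auth.sh@74 through @77): the negotiated digest, the DH group, and the final authentication state of the single qpair must match what bdev_nvme_set_options pinned on the host side. Condensed from the records above, with $digest and $dhgroup standing for connect_authenticate's arguments:

    # Condensed form of the auth.sh@74-77 checks: one qpair, fully
    # authenticated, using exactly the configured digest and DH group.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]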
00:15:24.089 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.089 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.089 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:24.089 10:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.347 10:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.347 10:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.347 10:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.347 10:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:24.347 10:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:25.281 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.539 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.539 10:57:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.798 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.798 00:15:26.056 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.056 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.056 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.056 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.057 { 00:15:26.057 "cntlid": 11, 00:15:26.057 "qid": 0, 00:15:26.057 "state": "enabled", 00:15:26.057 "thread": "nvmf_tgt_poll_group_000", 00:15:26.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:26.057 "listen_address": { 00:15:26.057 "trtype": "RDMA", 00:15:26.057 "adrfam": "IPv4", 00:15:26.057 "traddr": "192.168.100.8", 00:15:26.057 "trsvcid": "4420" 00:15:26.057 }, 00:15:26.057 "peer_address": { 00:15:26.057 "trtype": "RDMA", 00:15:26.057 "adrfam": "IPv4", 00:15:26.057 "traddr": 
"192.168.100.8", 00:15:26.057 "trsvcid": "50532" 00:15:26.057 }, 00:15:26.057 "auth": { 00:15:26.057 "state": "completed", 00:15:26.057 "digest": "sha256", 00:15:26.057 "dhgroup": "ffdhe2048" 00:15:26.057 } 00:15:26.057 } 00:15:26.057 ]' 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.057 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.314 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:26.314 10:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.314 10:57:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.314 10:57:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.314 10:57:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.572 10:57:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:26.572 10:57:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:27.139 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.397 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:27.397 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.397 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.397 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.397 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.397 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.397 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.654 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.912 00:15:27.912 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.912 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.913 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.171 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.171 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.171 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.171 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.171 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.171 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.171 { 00:15:28.171 "cntlid": 13, 00:15:28.171 "qid": 0, 00:15:28.171 "state": "enabled", 00:15:28.171 "thread": "nvmf_tgt_poll_group_000", 00:15:28.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:28.171 "listen_address": { 00:15:28.171 
"trtype": "RDMA", 00:15:28.171 "adrfam": "IPv4", 00:15:28.171 "traddr": "192.168.100.8", 00:15:28.171 "trsvcid": "4420" 00:15:28.171 }, 00:15:28.171 "peer_address": { 00:15:28.171 "trtype": "RDMA", 00:15:28.171 "adrfam": "IPv4", 00:15:28.171 "traddr": "192.168.100.8", 00:15:28.171 "trsvcid": "60900" 00:15:28.171 }, 00:15:28.171 "auth": { 00:15:28.171 "state": "completed", 00:15:28.171 "digest": "sha256", 00:15:28.171 "dhgroup": "ffdhe2048" 00:15:28.172 } 00:15:28.172 } 00:15:28.172 ]' 00:15:28.172 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.172 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.172 10:57:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.172 10:57:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:28.172 10:57:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.431 10:57:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.431 10:57:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.431 10:57:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.431 10:57:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:28.431 10:57:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:29.366 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.624 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.625 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.884 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.142 { 00:15:30.142 "cntlid": 15, 00:15:30.142 "qid": 0, 00:15:30.142 "state": "enabled", 
00:15:30.142 "thread": "nvmf_tgt_poll_group_000", 00:15:30.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:30.142 "listen_address": { 00:15:30.142 "trtype": "RDMA", 00:15:30.142 "adrfam": "IPv4", 00:15:30.142 "traddr": "192.168.100.8", 00:15:30.142 "trsvcid": "4420" 00:15:30.142 }, 00:15:30.142 "peer_address": { 00:15:30.142 "trtype": "RDMA", 00:15:30.142 "adrfam": "IPv4", 00:15:30.142 "traddr": "192.168.100.8", 00:15:30.142 "trsvcid": "48113" 00:15:30.142 }, 00:15:30.142 "auth": { 00:15:30.142 "state": "completed", 00:15:30.142 "digest": "sha256", 00:15:30.142 "dhgroup": "ffdhe2048" 00:15:30.142 } 00:15:30.142 } 00:15:30.142 ]' 00:15:30.142 10:57:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.142 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.142 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.401 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:30.401 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.401 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.401 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.401 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.659 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:30.659 10:57:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:31.224 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.482 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:31.482 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.483 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.483 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.483 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.483 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.483 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.483 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.741 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.999 00:15:31.999 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.999 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.999 10:57:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.257 10:57:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.257 { 00:15:32.257 "cntlid": 17, 00:15:32.257 "qid": 0, 00:15:32.257 "state": "enabled", 00:15:32.257 "thread": "nvmf_tgt_poll_group_000", 00:15:32.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:32.257 "listen_address": { 00:15:32.257 "trtype": "RDMA", 00:15:32.257 "adrfam": "IPv4", 00:15:32.257 "traddr": "192.168.100.8", 00:15:32.257 "trsvcid": "4420" 00:15:32.257 }, 00:15:32.257 "peer_address": { 00:15:32.257 "trtype": "RDMA", 00:15:32.257 "adrfam": "IPv4", 00:15:32.257 "traddr": "192.168.100.8", 00:15:32.257 "trsvcid": "52170" 00:15:32.257 }, 00:15:32.257 "auth": { 00:15:32.257 "state": "completed", 00:15:32.257 "digest": "sha256", 00:15:32.257 "dhgroup": "ffdhe3072" 00:15:32.257 } 00:15:32.257 } 00:15:32.257 ]' 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.257 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.515 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.515 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.515 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.515 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:32.515 10:57:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:33.450 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.708 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.967 00:15:33.967 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.967 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.967 10:57:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.226 10:57:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.226 { 00:15:34.226 "cntlid": 19, 00:15:34.226 "qid": 0, 00:15:34.226 "state": "enabled", 00:15:34.226 "thread": "nvmf_tgt_poll_group_000", 00:15:34.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:34.226 "listen_address": { 00:15:34.226 "trtype": "RDMA", 00:15:34.226 "adrfam": "IPv4", 00:15:34.226 "traddr": "192.168.100.8", 00:15:34.226 "trsvcid": "4420" 00:15:34.226 }, 00:15:34.226 "peer_address": { 00:15:34.226 "trtype": "RDMA", 00:15:34.226 "adrfam": "IPv4", 00:15:34.226 "traddr": "192.168.100.8", 00:15:34.226 "trsvcid": "42310" 00:15:34.226 }, 00:15:34.226 "auth": { 00:15:34.226 "state": "completed", 00:15:34.226 "digest": "sha256", 00:15:34.226 "dhgroup": "ffdhe3072" 00:15:34.226 } 00:15:34.226 } 00:15:34.226 ]' 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.226 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.485 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:34.485 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.485 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.485 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.485 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.744 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:34.744 10:57:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:35.310 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.568 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:35.568 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.568 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.568 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.568 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.568 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:35.568 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.827 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.086 00:15:36.086 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.086 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
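[Annotation] Target-side, every pass is bracketed the same way: the host NQN is registered on cnode0 with the key pair under test (@70) and removed again after the disconnect (@83), so no credential outlives its own pass. Condensed, with this run's NQNs (the key names refer to keyring entries set up earlier in the job):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2   # ctrlr key omitted on key3 passes
  # ... attach, verify the qpair, detach, kernel connect/disconnect ...
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
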
00:15:36.086 10:57:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.344 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.344 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.344 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.344 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.344 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.344 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.344 { 00:15:36.344 "cntlid": 21, 00:15:36.344 "qid": 0, 00:15:36.344 "state": "enabled", 00:15:36.344 "thread": "nvmf_tgt_poll_group_000", 00:15:36.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:36.344 "listen_address": { 00:15:36.344 "trtype": "RDMA", 00:15:36.345 "adrfam": "IPv4", 00:15:36.345 "traddr": "192.168.100.8", 00:15:36.345 "trsvcid": "4420" 00:15:36.345 }, 00:15:36.345 "peer_address": { 00:15:36.345 "trtype": "RDMA", 00:15:36.345 "adrfam": "IPv4", 00:15:36.345 "traddr": "192.168.100.8", 00:15:36.345 "trsvcid": "49385" 00:15:36.345 }, 00:15:36.345 "auth": { 00:15:36.345 "state": "completed", 00:15:36.345 "digest": "sha256", 00:15:36.345 "dhgroup": "ffdhe3072" 00:15:36.345 } 00:15:36.345 } 00:15:36.345 ]' 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.345 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.603 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:36.603 10:57:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:37.537 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.537 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:37.537 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.537 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.796 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.054 00:15:38.054 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.054 10:57:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.054 10:57:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.312 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.312 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.312 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.312 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.312 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.312 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.312 { 00:15:38.312 "cntlid": 23, 00:15:38.312 "qid": 0, 00:15:38.312 "state": "enabled", 00:15:38.312 "thread": "nvmf_tgt_poll_group_000", 00:15:38.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:38.312 "listen_address": { 00:15:38.312 "trtype": "RDMA", 00:15:38.312 "adrfam": "IPv4", 00:15:38.312 "traddr": "192.168.100.8", 00:15:38.312 "trsvcid": "4420" 00:15:38.312 }, 00:15:38.312 "peer_address": { 00:15:38.312 "trtype": "RDMA", 00:15:38.312 "adrfam": "IPv4", 00:15:38.313 "traddr": "192.168.100.8", 00:15:38.313 "trsvcid": "55402" 00:15:38.313 }, 00:15:38.313 "auth": { 00:15:38.313 "state": "completed", 00:15:38.313 "digest": "sha256", 00:15:38.313 "dhgroup": "ffdhe3072" 00:15:38.313 } 00:15:38.313 } 00:15:38.313 ]' 00:15:38.313 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.313 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.313 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.313 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:38.313 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.571 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.571 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.571 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.571 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:38.571 10:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:39.504 10:57:28 
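[Annotation] That was the key3 pass: @70 registered key3 with no --dhchap-ctrlr-key, and the matching nvme connect carried only --dhchap-secret, i.e. host-to-controller authentication without the bidirectional step. The verification closing every pass is the same three assertions against the target's qpair listing; condensed to the values this pass expects:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # Exactly one enabled qpair whose negotiated auth matches the pass's config.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
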
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.763 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.022 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.281 00:15:40.281 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.281 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.281 10:57:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.281 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.281 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.281 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.281 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.281 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.281 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.281 { 00:15:40.281 "cntlid": 25, 00:15:40.281 "qid": 0, 00:15:40.281 "state": "enabled", 00:15:40.281 "thread": "nvmf_tgt_poll_group_000", 00:15:40.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:40.281 "listen_address": { 00:15:40.281 "trtype": "RDMA", 00:15:40.281 "adrfam": "IPv4", 00:15:40.281 "traddr": "192.168.100.8", 00:15:40.281 "trsvcid": "4420" 00:15:40.281 }, 00:15:40.281 "peer_address": { 00:15:40.281 "trtype": "RDMA", 00:15:40.281 "adrfam": "IPv4", 00:15:40.281 "traddr": "192.168.100.8", 00:15:40.281 "trsvcid": "56529" 00:15:40.281 }, 00:15:40.281 "auth": { 00:15:40.281 "state": "completed", 00:15:40.281 "digest": "sha256", 00:15:40.281 "dhgroup": "ffdhe4096" 00:15:40.281 } 00:15:40.281 } 00:15:40.281 ]' 00:15:40.281 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.541 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.541 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.541 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.541 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.541 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.541 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.541 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.799 10:57:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:40.799 10:57:29 
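[Annotation] Each pass authenticates twice: once through the SPDK host stack (bdev_nvme_attach_controller / bdev_nvme_detach_controller over /var/tmp/host.sock) and once through the kernel initiator via nvme-cli, against the same subsystem credentials. The kernel leg with this run's fabric parameters, secrets abbreviated here (the literal DHHC-1 strings appear in full above):

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # $hostid is the uuid portion of $hostnqn, as in the traces above.
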
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:41.366 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.624 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:41.624 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.624 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.624 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.624 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:41.624 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.882 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.883 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.883 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.883 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.883 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.142 00:15:42.142 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.142 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.142 10:57:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.401 { 00:15:42.401 "cntlid": 27, 00:15:42.401 "qid": 0, 00:15:42.401 "state": "enabled", 00:15:42.401 "thread": "nvmf_tgt_poll_group_000", 00:15:42.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:42.401 "listen_address": { 00:15:42.401 "trtype": "RDMA", 00:15:42.401 "adrfam": "IPv4", 00:15:42.401 "traddr": "192.168.100.8", 00:15:42.401 "trsvcid": "4420" 00:15:42.401 }, 00:15:42.401 "peer_address": { 00:15:42.401 "trtype": "RDMA", 00:15:42.401 "adrfam": "IPv4", 00:15:42.401 "traddr": "192.168.100.8", 00:15:42.401 "trsvcid": "59245" 00:15:42.401 }, 00:15:42.401 "auth": { 00:15:42.401 "state": "completed", 00:15:42.401 "digest": "sha256", 00:15:42.401 "dhgroup": "ffdhe4096" 00:15:42.401 } 00:15:42.401 } 00:15:42.401 ]' 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.401 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.661 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:42.661 10:57:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:43.596 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.596 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:43.596 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.596 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.855 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.115 00:15:44.115 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.115 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.115 10:57:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.373 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.373 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.373 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.373 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.374 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.374 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.374 { 00:15:44.374 "cntlid": 29, 00:15:44.374 "qid": 0, 00:15:44.374 "state": "enabled", 00:15:44.374 "thread": "nvmf_tgt_poll_group_000", 00:15:44.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:44.374 "listen_address": { 00:15:44.374 "trtype": "RDMA", 00:15:44.374 "adrfam": "IPv4", 00:15:44.374 "traddr": "192.168.100.8", 00:15:44.374 "trsvcid": "4420" 00:15:44.374 }, 00:15:44.374 "peer_address": { 00:15:44.374 "trtype": "RDMA", 00:15:44.374 "adrfam": "IPv4", 00:15:44.374 "traddr": "192.168.100.8", 00:15:44.374 "trsvcid": "39366" 00:15:44.374 }, 00:15:44.374 "auth": { 00:15:44.374 "state": "completed", 00:15:44.374 "digest": "sha256", 00:15:44.374 "dhgroup": "ffdhe4096" 00:15:44.374 } 00:15:44.374 } 00:15:44.374 ]' 00:15:44.374 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.374 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.374 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.633 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:44.633 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.633 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.633 10:57:33 
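[Annotation] Zooming out, the @119/@120 frames show the sweep driving all of this: an outer walk over DH groups (ffdhe3072 earlier, ffdhe4096 here, ffdhe6144 below) and an inner walk over key ids 0 through 3, with the host pinned to exactly one digest/dhgroup pair before each attempt so a successful attach proves that specific negotiation. Sketched from the traces, bodies elided; sha256 is the digest in play at this point in the run:

  for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do         # 0..3
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @121
          connect_authenticate sha256 "$dhgroup" "$keyid"            # @123
      done
  done
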
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.633 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.892 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:44.892 10:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:45.458 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.717 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:45.717 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.717 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.717 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.717 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.717 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:45.717 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:46.056 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.057 10:57:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.057 10:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.363 00:15:46.363 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.363 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.363 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.653 { 00:15:46.653 "cntlid": 31, 00:15:46.653 "qid": 0, 00:15:46.653 "state": "enabled", 00:15:46.653 "thread": "nvmf_tgt_poll_group_000", 00:15:46.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:46.653 "listen_address": { 00:15:46.653 "trtype": "RDMA", 00:15:46.653 "adrfam": "IPv4", 00:15:46.653 "traddr": "192.168.100.8", 00:15:46.653 "trsvcid": "4420" 00:15:46.653 }, 00:15:46.653 "peer_address": { 00:15:46.653 "trtype": "RDMA", 00:15:46.653 "adrfam": "IPv4", 00:15:46.653 "traddr": "192.168.100.8", 00:15:46.653 "trsvcid": "40534" 00:15:46.653 }, 00:15:46.653 "auth": { 00:15:46.653 "state": "completed", 00:15:46.653 "digest": "sha256", 00:15:46.653 "dhgroup": "ffdhe4096" 00:15:46.653 } 00:15:46.653 } 00:15:46.653 ]' 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.653 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.912 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:46.912 10:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:47.848 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.106 10:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.364 00:15:48.364 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.364 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.364 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.623 { 00:15:48.623 "cntlid": 33, 00:15:48.623 "qid": 0, 00:15:48.623 "state": "enabled", 00:15:48.623 "thread": "nvmf_tgt_poll_group_000", 00:15:48.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:48.623 "listen_address": { 00:15:48.623 "trtype": "RDMA", 00:15:48.623 "adrfam": "IPv4", 00:15:48.623 "traddr": "192.168.100.8", 00:15:48.623 "trsvcid": "4420" 00:15:48.623 }, 00:15:48.623 "peer_address": { 00:15:48.623 "trtype": "RDMA", 00:15:48.623 "adrfam": "IPv4", 00:15:48.623 "traddr": "192.168.100.8", 00:15:48.623 "trsvcid": "60689" 00:15:48.623 }, 00:15:48.623 "auth": { 00:15:48.623 "state": "completed", 00:15:48.623 "digest": "sha256", 00:15:48.623 "dhgroup": "ffdhe6144" 00:15:48.623 } 00:15:48.623 } 00:15:48.623 ]' 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.623 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:15:48.882 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.882 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.882 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.882 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.882 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.882 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:48.882 10:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:49.817 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.075 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.076 10:57:38 
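[Annotation] About the DHHC-1 strings threaded through every nvme connect above: per the NVMe in-band authentication secret format, they read DHHC-1:<t>:<base64 key material>:, where <t> names the hash used to transform the configured secret (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the key0 passes pair a 00 host secret with an 03 controller secret. A toy extraction of that field, using a placeholder string rather than one of this run's keys:

  secret='DHHC-1:01:QUJDREVGR0g=:'   # hypothetical placeholder, not a real key
  t=${secret#DHHC-1:}; t=${t%%:*}    # strip prefix, keep the transform id
  echo "key transform: $t"           # 01 -> SHA-256-transformed secret
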
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.076 10:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.642 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.642 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.642 { 00:15:50.642 "cntlid": 35, 00:15:50.642 "qid": 0, 00:15:50.642 "state": "enabled", 00:15:50.642 "thread": "nvmf_tgt_poll_group_000", 00:15:50.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:50.643 "listen_address": { 00:15:50.643 "trtype": "RDMA", 00:15:50.643 "adrfam": "IPv4", 00:15:50.643 "traddr": "192.168.100.8", 00:15:50.643 "trsvcid": "4420" 00:15:50.643 }, 00:15:50.643 "peer_address": { 00:15:50.643 "trtype": "RDMA", 00:15:50.643 "adrfam": "IPv4", 00:15:50.643 "traddr": "192.168.100.8", 00:15:50.643 "trsvcid": "37137" 00:15:50.643 }, 00:15:50.643 "auth": { 00:15:50.643 "state": "completed", 00:15:50.643 "digest": "sha256", 00:15:50.643 "dhgroup": "ffdhe6144" 00:15:50.643 } 00:15:50.643 } 
00:15:50.643 ]' 00:15:50.643 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.902 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.902 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.902 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:50.902 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.902 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.902 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.902 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.160 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:51.161 10:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:52.096 10:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.354 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.355 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.612 00:15:52.613 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.613 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.613 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.870 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.871 { 00:15:52.871 "cntlid": 37, 00:15:52.871 "qid": 0, 00:15:52.871 "state": "enabled", 00:15:52.871 "thread": "nvmf_tgt_poll_group_000", 00:15:52.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:52.871 "listen_address": { 00:15:52.871 "trtype": "RDMA", 00:15:52.871 "adrfam": "IPv4", 00:15:52.871 "traddr": "192.168.100.8", 00:15:52.871 "trsvcid": "4420" 00:15:52.871 }, 00:15:52.871 "peer_address": { 00:15:52.871 "trtype": "RDMA", 00:15:52.871 "adrfam": 
"IPv4", 00:15:52.871 "traddr": "192.168.100.8", 00:15:52.871 "trsvcid": "50708" 00:15:52.871 }, 00:15:52.871 "auth": { 00:15:52.871 "state": "completed", 00:15:52.871 "digest": "sha256", 00:15:52.871 "dhgroup": "ffdhe6144" 00:15:52.871 } 00:15:52.871 } 00:15:52.871 ]' 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:52.871 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.129 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.129 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.129 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.129 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:53.129 10:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:15:54.063 10:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.322 10:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:54.322 10:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.322 10:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.322 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.889 00:15:54.889 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.889 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.889 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.148 { 00:15:55.148 "cntlid": 39, 00:15:55.148 "qid": 0, 00:15:55.148 "state": "enabled", 00:15:55.148 "thread": "nvmf_tgt_poll_group_000", 00:15:55.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:55.148 "listen_address": { 00:15:55.148 "trtype": "RDMA", 00:15:55.148 "adrfam": "IPv4", 00:15:55.148 
"traddr": "192.168.100.8", 00:15:55.148 "trsvcid": "4420" 00:15:55.148 }, 00:15:55.148 "peer_address": { 00:15:55.148 "trtype": "RDMA", 00:15:55.148 "adrfam": "IPv4", 00:15:55.148 "traddr": "192.168.100.8", 00:15:55.148 "trsvcid": "58269" 00:15:55.148 }, 00:15:55.148 "auth": { 00:15:55.148 "state": "completed", 00:15:55.148 "digest": "sha256", 00:15:55.148 "dhgroup": "ffdhe6144" 00:15:55.148 } 00:15:55.148 } 00:15:55.148 ]' 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.148 10:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.407 10:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:55.407 10:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:15:56.343 10:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:56.343 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.601 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.602 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.602 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.168 00:15:57.168 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.168 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.168 10:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.168 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.168 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.168 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.168 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.169 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.427 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.427 { 00:15:57.427 "cntlid": 41, 00:15:57.427 "qid": 0, 00:15:57.427 "state": "enabled", 
00:15:57.427 "thread": "nvmf_tgt_poll_group_000", 00:15:57.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:57.427 "listen_address": { 00:15:57.427 "trtype": "RDMA", 00:15:57.427 "adrfam": "IPv4", 00:15:57.427 "traddr": "192.168.100.8", 00:15:57.428 "trsvcid": "4420" 00:15:57.428 }, 00:15:57.428 "peer_address": { 00:15:57.428 "trtype": "RDMA", 00:15:57.428 "adrfam": "IPv4", 00:15:57.428 "traddr": "192.168.100.8", 00:15:57.428 "trsvcid": "49555" 00:15:57.428 }, 00:15:57.428 "auth": { 00:15:57.428 "state": "completed", 00:15:57.428 "digest": "sha256", 00:15:57.428 "dhgroup": "ffdhe8192" 00:15:57.428 } 00:15:57.428 } 00:15:57.428 ]' 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.428 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.687 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:57.687 10:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:15:58.622 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.622 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:58.622 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.622 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.622 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.622 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.622 10:57:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:58.622 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.881 10:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.448 00:15:59.448 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.448 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.448 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.706 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.707 { 00:15:59.707 "cntlid": 43, 00:15:59.707 "qid": 0, 00:15:59.707 "state": "enabled", 00:15:59.707 "thread": "nvmf_tgt_poll_group_000", 00:15:59.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:15:59.707 "listen_address": { 00:15:59.707 "trtype": "RDMA", 00:15:59.707 "adrfam": "IPv4", 00:15:59.707 "traddr": "192.168.100.8", 00:15:59.707 "trsvcid": "4420" 00:15:59.707 }, 00:15:59.707 "peer_address": { 00:15:59.707 "trtype": "RDMA", 00:15:59.707 "adrfam": "IPv4", 00:15:59.707 "traddr": "192.168.100.8", 00:15:59.707 "trsvcid": "36806" 00:15:59.707 }, 00:15:59.707 "auth": { 00:15:59.707 "state": "completed", 00:15:59.707 "digest": "sha256", 00:15:59.707 "dhgroup": "ffdhe8192" 00:15:59.707 } 00:15:59.707 } 00:15:59.707 ]' 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.707 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.965 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:15:59.965 10:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
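The trace repeats one fixed verification pattern per key. A condensed sketch of that per-key flow, reconstructed from the xtrace above (rpc_cmd drives the target-side rpc.py, hostrpc drives the one on /var/tmp/host.sock, and the ckeys array of optional controller secrets is assumed to have been populated earlier in auth.sh):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 qpairs
        local ckey=()
        # The controller key is optional; key3 in this run has no ckey3.
        [[ -n ${ckeys[$keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")

        # Allow the host on the subsystem with the key pair under test.
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # Attach a host-side controller; this is where DH-HMAC-CHAP runs.
        hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 \
            -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # Verify the controller exists and the target-side qpair negotiated
        # exactly the digest/dhgroup that was requested.
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

        hostrpc bdev_nvme_detach_controller nvme0
    }

Checking .auth.state == completed on the target-side qpair, rather than only the host-side attach succeeding, is what confirms authentication actually completed with the requested digest and dhgroup.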
00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.901 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.160 10:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.727 00:16:01.727 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.727 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.727 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.985 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.985 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.985 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.985 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.985 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.985 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.985 { 00:16:01.985 "cntlid": 45, 00:16:01.985 "qid": 0, 00:16:01.985 "state": "enabled", 00:16:01.985 "thread": "nvmf_tgt_poll_group_000", 00:16:01.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:01.985 "listen_address": { 00:16:01.985 "trtype": "RDMA", 00:16:01.985 "adrfam": "IPv4", 00:16:01.985 "traddr": "192.168.100.8", 00:16:01.985 "trsvcid": "4420" 00:16:01.985 }, 00:16:01.985 "peer_address": { 00:16:01.986 "trtype": "RDMA", 00:16:01.986 "adrfam": "IPv4", 00:16:01.986 "traddr": "192.168.100.8", 00:16:01.986 "trsvcid": "47119" 00:16:01.986 }, 00:16:01.986 "auth": { 00:16:01.986 "state": "completed", 00:16:01.986 "digest": "sha256", 00:16:01.986 "dhgroup": "ffdhe8192" 00:16:01.986 } 00:16:01.986 } 00:16:01.986 ]' 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.986 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.244 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:02.244 10:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:03.177 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.177 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:03.178 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.178 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.178 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.178 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:03.178 10:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.436 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.004 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.004 
10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.004 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.004 { 00:16:04.004 "cntlid": 47, 00:16:04.004 "qid": 0, 00:16:04.004 "state": "enabled", 00:16:04.004 "thread": "nvmf_tgt_poll_group_000", 00:16:04.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:04.004 "listen_address": { 00:16:04.004 "trtype": "RDMA", 00:16:04.004 "adrfam": "IPv4", 00:16:04.004 "traddr": "192.168.100.8", 00:16:04.004 "trsvcid": "4420" 00:16:04.004 }, 00:16:04.004 "peer_address": { 00:16:04.004 "trtype": "RDMA", 00:16:04.004 "adrfam": "IPv4", 00:16:04.004 "traddr": "192.168.100.8", 00:16:04.004 "trsvcid": "43973" 00:16:04.004 }, 00:16:04.004 "auth": { 00:16:04.004 "state": "completed", 00:16:04.004 "digest": "sha256", 00:16:04.004 "dhgroup": "ffdhe8192" 00:16:04.004 } 00:16:04.004 } 00:16:04.004 ]' 00:16:04.005 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.263 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.263 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.263 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.263 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.263 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.263 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.263 10:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.522 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:04.522 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:05.089 10:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:05.348 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.608 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.867 00:16:05.867 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:16:05.867 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.867 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.125 { 00:16:06.125 "cntlid": 49, 00:16:06.125 "qid": 0, 00:16:06.125 "state": "enabled", 00:16:06.125 "thread": "nvmf_tgt_poll_group_000", 00:16:06.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:06.125 "listen_address": { 00:16:06.125 "trtype": "RDMA", 00:16:06.125 "adrfam": "IPv4", 00:16:06.125 "traddr": "192.168.100.8", 00:16:06.125 "trsvcid": "4420" 00:16:06.125 }, 00:16:06.125 "peer_address": { 00:16:06.125 "trtype": "RDMA", 00:16:06.125 "adrfam": "IPv4", 00:16:06.125 "traddr": "192.168.100.8", 00:16:06.125 "trsvcid": "46312" 00:16:06.125 }, 00:16:06.125 "auth": { 00:16:06.125 "state": "completed", 00:16:06.125 "digest": "sha384", 00:16:06.125 "dhgroup": "null" 00:16:06.125 } 00:16:06.125 } 00:16:06.125 ]' 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.125 10:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.383 10:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:06.383 10:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:07.319 10:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.319 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:07.319 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.319 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.578 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.836 00:16:07.836 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.836 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.836 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.093 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.093 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.093 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.093 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.093 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.094 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.094 { 00:16:08.094 "cntlid": 51, 00:16:08.094 "qid": 0, 00:16:08.094 "state": "enabled", 00:16:08.094 "thread": "nvmf_tgt_poll_group_000", 00:16:08.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:08.094 "listen_address": { 00:16:08.094 "trtype": "RDMA", 00:16:08.094 "adrfam": "IPv4", 00:16:08.094 "traddr": "192.168.100.8", 00:16:08.094 "trsvcid": "4420" 00:16:08.094 }, 00:16:08.094 "peer_address": { 00:16:08.094 "trtype": "RDMA", 00:16:08.094 "adrfam": "IPv4", 00:16:08.094 "traddr": "192.168.100.8", 00:16:08.094 "trsvcid": "58231" 00:16:08.094 }, 00:16:08.094 "auth": { 00:16:08.094 "state": "completed", 00:16:08.094 "digest": "sha384", 00:16:08.094 "dhgroup": "null" 00:16:08.094 } 00:16:08.094 } 00:16:08.094 ]' 00:16:08.094 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.094 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.094 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.094 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.094 10:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.352 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.352 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.352 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.352 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:08.352 10:57:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:09.288 10:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:16:09.547 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.805 00:16:09.805 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.805 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.805 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.064 { 00:16:10.064 "cntlid": 53, 00:16:10.064 "qid": 0, 00:16:10.064 "state": "enabled", 00:16:10.064 "thread": "nvmf_tgt_poll_group_000", 00:16:10.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:10.064 "listen_address": { 00:16:10.064 "trtype": "RDMA", 00:16:10.064 "adrfam": "IPv4", 00:16:10.064 "traddr": "192.168.100.8", 00:16:10.064 "trsvcid": "4420" 00:16:10.064 }, 00:16:10.064 "peer_address": { 00:16:10.064 "trtype": "RDMA", 00:16:10.064 "adrfam": "IPv4", 00:16:10.064 "traddr": "192.168.100.8", 00:16:10.064 "trsvcid": "59529" 00:16:10.064 }, 00:16:10.064 "auth": { 00:16:10.064 "state": "completed", 00:16:10.064 "digest": "sha384", 00:16:10.064 "dhgroup": "null" 00:16:10.064 } 00:16:10.064 } 00:16:10.064 ]' 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.064 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.322 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.322 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.322 10:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.322 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:10.322 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:11.258 10:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.516 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:11.516 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.516 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.516 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.516 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.516 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:11.516 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.775 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.033 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.033 { 00:16:12.033 "cntlid": 55, 00:16:12.033 "qid": 0, 00:16:12.033 "state": "enabled", 00:16:12.033 "thread": "nvmf_tgt_poll_group_000", 00:16:12.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:12.033 "listen_address": { 00:16:12.033 "trtype": "RDMA", 00:16:12.033 "adrfam": "IPv4", 00:16:12.033 "traddr": "192.168.100.8", 00:16:12.033 "trsvcid": "4420" 00:16:12.033 }, 00:16:12.033 "peer_address": { 00:16:12.033 "trtype": "RDMA", 00:16:12.033 "adrfam": "IPv4", 00:16:12.033 "traddr": "192.168.100.8", 00:16:12.033 "trsvcid": "33811" 00:16:12.033 }, 00:16:12.033 "auth": { 00:16:12.033 "state": "completed", 00:16:12.033 "digest": "sha384", 00:16:12.033 "dhgroup": "null" 00:16:12.033 } 00:16:12.033 } 00:16:12.033 ]' 00:16:12.033 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.291 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.291 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.291 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.291 10:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.291 10:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.291 10:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.291 10:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
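For reference, each pass of the trace above is the same authentication round with a different key: the host RPC server is restricted to one digest/dhgroup pair, the target re-authorizes the host with that round's DH-HMAC-CHAP key(s), a controller is attached through the host's RPC socket, the target's qpair list is checked for auth.state == "completed", and the controller is detached. A condensed sketch of one such round follows, taking the key2 pass a few steps above as the example; the paths, NQNs, and key names are copied from this run, and the keys themselves (key0..key3, ckey0..ckey2) are assumed to have been registered earlier in the test setup, which is not shown here.

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    HOSTSOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562

    # Host side: restrict negotiation to the digest/dhgroup under test.
    $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

    # Target side: authorize the host with the unidirectional and bidirectional keys.
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller, authenticating with the same key pair.
    $RPC -s $HOSTSOCK bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Target side: confirm the new qpair finished DH-HMAC-CHAP ("completed").
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'

    # Tear down before the next key/dhgroup combination.
    $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

As the trace shows, the rounds that use key3 pass only --dhchap-key (no controller key), since no ckey3 is defined for that slot.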
00:16:12.549 10:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:12.549 10:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:13.486 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.745 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.003 00:16:14.003 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.003 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.003 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.261 { 00:16:14.261 "cntlid": 57, 00:16:14.261 "qid": 0, 00:16:14.261 "state": "enabled", 00:16:14.261 "thread": "nvmf_tgt_poll_group_000", 00:16:14.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:14.261 "listen_address": { 00:16:14.261 "trtype": "RDMA", 00:16:14.261 "adrfam": "IPv4", 00:16:14.261 "traddr": "192.168.100.8", 00:16:14.261 "trsvcid": "4420" 00:16:14.261 }, 00:16:14.261 "peer_address": { 00:16:14.261 "trtype": "RDMA", 00:16:14.261 "adrfam": "IPv4", 00:16:14.261 "traddr": "192.168.100.8", 00:16:14.261 "trsvcid": "40945" 00:16:14.261 }, 00:16:14.261 "auth": { 00:16:14.261 "state": "completed", 00:16:14.261 "digest": "sha384", 00:16:14.261 "dhgroup": "ffdhe2048" 00:16:14.261 } 00:16:14.261 } 00:16:14.261 ]' 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.261 10:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.261 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.261 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.261 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.261 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:16:14.262 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.519 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:14.519 10:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:15.455 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.714 
10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.714 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.974 00:16:15.974 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.974 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.974 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.232 { 00:16:16.232 "cntlid": 59, 00:16:16.232 "qid": 0, 00:16:16.232 "state": "enabled", 00:16:16.232 "thread": "nvmf_tgt_poll_group_000", 00:16:16.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:16.232 "listen_address": { 00:16:16.232 "trtype": "RDMA", 00:16:16.232 "adrfam": "IPv4", 00:16:16.232 "traddr": "192.168.100.8", 00:16:16.232 "trsvcid": "4420" 00:16:16.232 }, 00:16:16.232 "peer_address": { 00:16:16.232 "trtype": "RDMA", 00:16:16.232 "adrfam": "IPv4", 00:16:16.232 "traddr": "192.168.100.8", 00:16:16.232 "trsvcid": "47438" 00:16:16.232 }, 00:16:16.232 "auth": { 00:16:16.232 "state": "completed", 00:16:16.232 "digest": "sha384", 00:16:16.232 "dhgroup": "ffdhe2048" 00:16:16.232 } 00:16:16.232 } 00:16:16.232 ]' 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.232 10:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.232 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
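Alongside the SPDK-host rounds, the trace also exercises the in-kernel initiator: after re-adding the host on the target, nvme-cli connects with the DHHC-1 secrets passed inline, then disconnects before the next round. A sketch of that leg, with the base64 secret material replaced by placeholders (the real values appear verbatim in the trace above; flags and addresses are taken from this run):

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
    HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562

    # Connect over RDMA with one I/O queue, no reconnects on ctrl loss (-l 0),
    # authenticating with the host secret and (optionally) the controller secret.
    nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
        --dhchap-secret 'DHHC-1:01:<host-key-base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-key-base64>:'

    # On success the later disconnect reports "disconnected 1 controller(s)".
    nvme disconnect -n $SUBNQN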
00:16:16.232 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.232 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.232 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.232 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.491 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:16.491 10:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:17.426 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.685 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.944 00:16:17.944 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.944 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.944 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.203 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.203 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.203 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.203 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.203 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.203 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.203 { 00:16:18.203 "cntlid": 61, 00:16:18.203 "qid": 0, 00:16:18.203 "state": "enabled", 00:16:18.203 "thread": "nvmf_tgt_poll_group_000", 00:16:18.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:18.203 "listen_address": { 00:16:18.203 "trtype": "RDMA", 00:16:18.203 "adrfam": "IPv4", 00:16:18.203 "traddr": "192.168.100.8", 00:16:18.203 "trsvcid": "4420" 00:16:18.203 }, 00:16:18.203 "peer_address": { 00:16:18.203 "trtype": "RDMA", 00:16:18.203 "adrfam": "IPv4", 00:16:18.203 "traddr": "192.168.100.8", 00:16:18.203 "trsvcid": "54096" 00:16:18.203 }, 00:16:18.203 "auth": { 00:16:18.203 "state": "completed", 00:16:18.203 "digest": "sha384", 00:16:18.203 "dhgroup": "ffdhe2048" 00:16:18.203 } 00:16:18.203 } 00:16:18.203 ]' 00:16:18.203 10:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.203 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:18.203 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.203 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.203 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.461 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.461 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.461 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.461 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:18.461 10:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:19.396 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.655 10:58:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.655 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.913 00:16:19.913 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.913 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.913 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.172 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.172 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.172 10:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.172 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.172 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.172 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.172 { 00:16:20.172 "cntlid": 63, 00:16:20.172 "qid": 0, 00:16:20.172 "state": "enabled", 00:16:20.172 "thread": "nvmf_tgt_poll_group_000", 00:16:20.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:20.172 "listen_address": { 00:16:20.172 "trtype": "RDMA", 00:16:20.172 "adrfam": "IPv4", 00:16:20.172 "traddr": "192.168.100.8", 00:16:20.172 "trsvcid": "4420" 00:16:20.172 }, 00:16:20.172 "peer_address": { 00:16:20.172 "trtype": "RDMA", 00:16:20.172 "adrfam": "IPv4", 00:16:20.172 "traddr": "192.168.100.8", 00:16:20.173 "trsvcid": "54428" 00:16:20.173 }, 00:16:20.173 "auth": { 00:16:20.173 "state": "completed", 00:16:20.173 "digest": "sha384", 00:16:20.173 "dhgroup": "ffdhe2048" 00:16:20.173 } 00:16:20.173 } 00:16:20.173 ]' 00:16:20.173 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.173 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.173 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.432 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.432 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.432 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.432 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.432 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.691 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:20.691 10:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:21.257 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.515 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.774 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.032 00:16:22.032 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.032 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.032 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.292 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.292 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.292 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.292 10:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.292 { 00:16:22.292 "cntlid": 65, 00:16:22.292 "qid": 0, 00:16:22.292 "state": "enabled", 00:16:22.292 "thread": "nvmf_tgt_poll_group_000", 00:16:22.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:22.292 "listen_address": { 00:16:22.292 "trtype": "RDMA", 00:16:22.292 "adrfam": "IPv4", 00:16:22.292 "traddr": "192.168.100.8", 00:16:22.292 "trsvcid": "4420" 00:16:22.292 }, 00:16:22.292 "peer_address": { 00:16:22.292 "trtype": "RDMA", 00:16:22.292 "adrfam": "IPv4", 00:16:22.292 "traddr": "192.168.100.8", 00:16:22.292 "trsvcid": "37613" 
00:16:22.292 }, 00:16:22.292 "auth": { 00:16:22.292 "state": "completed", 00:16:22.292 "digest": "sha384", 00:16:22.292 "dhgroup": "ffdhe3072" 00:16:22.292 } 00:16:22.292 } 00:16:22.292 ]' 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.292 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.551 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:22.551 10:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.484 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.741 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.052 00:16:24.052 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.052 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.052 10:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.364 { 00:16:24.364 "cntlid": 67, 00:16:24.364 "qid": 0, 00:16:24.364 "state": "enabled", 00:16:24.364 "thread": "nvmf_tgt_poll_group_000", 00:16:24.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 
00:16:24.364 "listen_address": { 00:16:24.364 "trtype": "RDMA", 00:16:24.364 "adrfam": "IPv4", 00:16:24.364 "traddr": "192.168.100.8", 00:16:24.364 "trsvcid": "4420" 00:16:24.364 }, 00:16:24.364 "peer_address": { 00:16:24.364 "trtype": "RDMA", 00:16:24.364 "adrfam": "IPv4", 00:16:24.364 "traddr": "192.168.100.8", 00:16:24.364 "trsvcid": "38430" 00:16:24.364 }, 00:16:24.364 "auth": { 00:16:24.364 "state": "completed", 00:16:24.364 "digest": "sha384", 00:16:24.364 "dhgroup": "ffdhe3072" 00:16:24.364 } 00:16:24.364 } 00:16:24.364 ]' 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.364 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.626 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:24.626 10:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:25.560 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.560 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:25.560 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.560 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.560 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.561 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.561 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:25.561 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.819 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.077 00:16:26.077 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.077 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.077 10:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:16:26.336 {
00:16:26.336 "cntlid": 69,
00:16:26.336 "qid": 0,
00:16:26.336 "state": "enabled",
00:16:26.336 "thread": "nvmf_tgt_poll_group_000",
00:16:26.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:26.336 "listen_address": {
00:16:26.336 "trtype": "RDMA",
00:16:26.336 "adrfam": "IPv4",
00:16:26.336 "traddr": "192.168.100.8",
00:16:26.336 "trsvcid": "4420"
00:16:26.336 },
00:16:26.336 "peer_address": {
00:16:26.336 "trtype": "RDMA",
00:16:26.336 "adrfam": "IPv4",
00:16:26.336 "traddr": "192.168.100.8",
00:16:26.336 "trsvcid": "39218"
00:16:26.336 },
00:16:26.336 "auth": {
00:16:26.336 "state": "completed",
00:16:26.336 "digest": "sha384",
00:16:26.336 "dhgroup": "ffdhe3072"
00:16:26.336 }
00:16:26.336 }
00:16:26.336 ]'
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:26.336 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:26.594 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E:
00:16:26.594 10:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E:
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:27.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:27.527
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:27.527 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
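Before each authentication attempt the test pins the host to a single digest/DH-group combination, so a successful attach can only mean that exact pair was negotiated. A minimal standalone sketch of the host-side call seen above (the rpc.py path and /var/tmp/host.sock socket are taken from this trace; it targets the host application, not the nvmf target):

  # Allow only SHA-384 and ffdhe3072 for DH-HMAC-CHAP on the host side.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072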
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:27.785 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:28.043
00:16:28.043 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:28.043 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.043 10:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:28.302 {
00:16:28.302 "cntlid": 71,
00:16:28.302 "qid": 0,
00:16:28.302 "state": "enabled",
00:16:28.302 "thread": "nvmf_tgt_poll_group_000",
00:16:28.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:28.302 "listen_address": {
00:16:28.302 "trtype": "RDMA",
00:16:28.302 "adrfam": "IPv4",
00:16:28.302 "traddr": "192.168.100.8",
00:16:28.302 "trsvcid": "4420"
00:16:28.302 },
00:16:28.302 "peer_address": {
00:16:28.302 "trtype": "RDMA",
00:16:28.302 "adrfam": "IPv4",
00:16:28.302 "traddr": "192.168.100.8",
00:16:28.302 "trsvcid": "58471"
00:16:28.302 },
00:16:28.302 "auth": {
00:16:28.302 "state": "completed",
00:16:28.302 "digest": "sha384",
00:16:28.302 "dhgroup": "ffdhe3072"
00:16:28.302 }
00:16:28.302 }
00:16:28.302 ]'
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:28.302 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:28.561 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.561 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.561 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.561 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=:
00:16:28.561 10:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=:
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:29.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:29.496
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:29.496 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.754 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.013
00:16:30.013 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:30.013 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:30.013 10:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:30.271 {
00:16:30.271 "cntlid": 73,
00:16:30.271 "qid": 0,
00:16:30.271 "state": "enabled",
00:16:30.271 "thread": "nvmf_tgt_poll_group_000",
00:16:30.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:30.271 "listen_address": {
00:16:30.271 "trtype": "RDMA",
00:16:30.271 "adrfam": "IPv4",
00:16:30.271 "traddr": "192.168.100.8",
00:16:30.271 "trsvcid": "4420"
00:16:30.271 },
00:16:30.271 "peer_address": {
00:16:30.271 "trtype": "RDMA",
00:16:30.271 "adrfam": "IPv4",
00:16:30.271 "traddr": "192.168.100.8",
00:16:30.271 "trsvcid": "59669"
00:16:30.271 },
00:16:30.271 "auth": {
00:16:30.271 "state": "completed",
00:16:30.271 "digest": "sha384",
00:16:30.271 "dhgroup": "ffdhe4096"
00:16:30.271 }
00:16:30.271 }
00:16:30.271 ]'
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:30.271 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:30.529 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:30.529 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:30.529 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:30.529 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:30.529 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:30.529 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=:
00:16:30.529 10:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=:
00:16:31.464 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.723
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.723 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.980 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.980 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.980 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.980 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.238
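bdev_connect then attaches a controller from the host application; DH-HMAC-CHAP runs in-band during the CONNECT exchange, so a controller only appears if authentication succeeds. A sketch with every flag taken verbatim from the trace above:

  # RDMA/IPv4 attach to 192.168.100.8:4420, authenticating with key1 and
  # verifying the controller with ckey1.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1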
00:16:32.238 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:32.238 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:32.238 10:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.238 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.238 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.238 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.238 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:32.497 {
00:16:32.497 "cntlid": 75,
00:16:32.497 "qid": 0,
00:16:32.497 "state": "enabled",
00:16:32.497 "thread": "nvmf_tgt_poll_group_000",
00:16:32.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:32.497 "listen_address": {
00:16:32.497 "trtype": "RDMA",
00:16:32.497 "adrfam": "IPv4",
00:16:32.497 "traddr": "192.168.100.8",
00:16:32.497 "trsvcid": "4420"
00:16:32.497 },
00:16:32.497 "peer_address": {
00:16:32.497 "trtype": "RDMA",
00:16:32.497 "adrfam": "IPv4",
00:16:32.497 "traddr": "192.168.100.8",
00:16:32.497 "trsvcid": "33085"
00:16:32.497 },
00:16:32.497 "auth": {
00:16:32.497 "state": "completed",
00:16:32.497 "digest": "sha384",
00:16:32.497 "dhgroup": "ffdhe4096"
00:16:32.497 }
00:16:32.497 }
00:16:32.497 ]'
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
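The block above is the pass/fail core of every iteration: the target reports its queue pairs and the script asserts the digest, DH group, and authentication state that were actually negotiated. A condensed sketch of that check (jq filters copied from the trace; rpc_cmd stands in for the target-side rpc.py invocation):

  # Fetch the subsystem's qpairs and assert what was really negotiated.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]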
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.497 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.754 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==:
00:16:32.754 10:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==:
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.687
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:33.687 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.945 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.202
00:16:34.202 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:34.202 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.202 10:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:34.460 {
00:16:34.460 "cntlid": 77,
00:16:34.460 "qid": 0,
00:16:34.460 "state": "enabled",
00:16:34.460 "thread": "nvmf_tgt_poll_group_000",
00:16:34.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:34.460 "listen_address": {
00:16:34.460 "trtype": "RDMA",
00:16:34.460 "adrfam": "IPv4",
00:16:34.460 "traddr": "192.168.100.8",
00:16:34.460 "trsvcid": "4420"
00:16:34.460 },
00:16:34.460 "peer_address": {
00:16:34.460 "trtype": "RDMA",
00:16:34.460 "adrfam": "IPv4",
00:16:34.460 "traddr": "192.168.100.8",
00:16:34.460 "trsvcid": "54159"
00:16:34.460 },
00:16:34.460 "auth": {
00:16:34.460 "state": "completed",
00:16:34.460 "digest": "sha384",
00:16:34.460 "dhgroup": "ffdhe4096"
00:16:34.460 }
00:16:34.460 }
00:16:34.460 ]'
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.460 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.717 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E:
00:16:34.717 10:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E:
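The same credentials are then exercised from the kernel host via nvme-cli, which takes the secrets directly rather than SPDK key names. In the DHHC-1:xx:<base64>: representation used here, the middle field encodes which hash, if any, transformed the secret (00 = unhashed; 01/02/03 correspond to SHA-256/384/512 per the NVMe-oF secret format), so key2 above is a SHA-384-transformed secret. A sketch with the trace's throwaway test values:

  # In-band DH-HMAC-CHAP from the kernel initiator; --dhchap-ctrl-secret
  # makes the authentication bidirectional.
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
      --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==:' \
      --dhchap-ctrl-secret 'DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E:'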
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.648
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:35.648 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:35.906 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:36.163
00:16:36.163 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:36.163 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:36.163 10:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:36.422 {
00:16:36.422 "cntlid": 79,
00:16:36.422 "qid": 0,
00:16:36.422 "state": "enabled",
00:16:36.422 "thread": "nvmf_tgt_poll_group_000",
00:16:36.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:36.422 "listen_address": {
00:16:36.422 "trtype": "RDMA",
00:16:36.422 "adrfam": "IPv4",
00:16:36.422 "traddr": "192.168.100.8",
00:16:36.422 "trsvcid": "4420"
00:16:36.422 },
00:16:36.422 "peer_address": {
00:16:36.422 "trtype": "RDMA",
00:16:36.422 "adrfam": "IPv4",
00:16:36.422 "traddr": "192.168.100.8",
00:16:36.422 "trsvcid": "41793"
00:16:36.422 },
00:16:36.422 "auth": {
00:16:36.422 "state": "completed",
00:16:36.422 "digest": "sha384",
00:16:36.422 "dhgroup": "ffdhe4096"
00:16:36.422 }
00:16:36.422 }
00:16:36.422 ]'
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.422 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.679 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=:
00:16:36.679 10:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=:
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:37.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:37.612
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
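The @119/@120 markers are the enclosing sweep: every DH group is retested with every key index. A reconstructed sketch of the loop's shape (function names come from the trace; the dhgroups array is shown only with the groups visible in this excerpt, and keys[] is populated earlier in auth.sh):

  digest=sha384
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this excerpt
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Pin the host to one combination, then run a full connect/verify cycle.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done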
00:16:37.612 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:37.613 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:37.870 10:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.435
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
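After each attach the script sanity-checks that exactly one controller named nvme0 exists before inspecting its queue pairs. An equivalent one-liner, assembled from the commands in the trace:

  # The attach succeeded only if the host app now reports controller nvme0.
  [[ $(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]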
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:38.435 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:38.435 {
00:16:38.435 "cntlid": 81,
00:16:38.435 "qid": 0,
00:16:38.435 "state": "enabled",
00:16:38.435 "thread": "nvmf_tgt_poll_group_000",
00:16:38.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:38.436 "listen_address": {
00:16:38.436 "trtype": "RDMA",
00:16:38.436 "adrfam": "IPv4",
00:16:38.436 "traddr": "192.168.100.8",
00:16:38.436 "trsvcid": "4420"
00:16:38.436 },
00:16:38.436 "peer_address": {
00:16:38.436 "trtype": "RDMA",
00:16:38.436 "adrfam": "IPv4",
00:16:38.436 "traddr": "192.168.100.8",
00:16:38.436 "trsvcid": "54955"
00:16:38.436 },
00:16:38.436 "auth": {
00:16:38.436 "state": "completed",
00:16:38.436 "digest": "sha384",
00:16:38.436 "dhgroup": "ffdhe6144"
00:16:38.436 }
00:16:38.436 }
00:16:38.436 ]'
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:38.693 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:38.951 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=:
00:16:38.951 10:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=:
00:16:39.516 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:39.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:39.774
00:16:39.774 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:39.774 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.774 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.774 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.774 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:39.774 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:39.774 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.032 10:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.291
00:16:40.291 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:40.291 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:40.291 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:40.549 {
00:16:40.549 "cntlid": 83,
00:16:40.549 "qid": 0,
00:16:40.549 "state": "enabled",
00:16:40.549 "thread": "nvmf_tgt_poll_group_000",
00:16:40.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:40.549 "listen_address": {
00:16:40.549 "trtype": "RDMA",
00:16:40.549 "adrfam": "IPv4",
00:16:40.549 "traddr": "192.168.100.8",
00:16:40.549 "trsvcid": "4420"
00:16:40.549 },
00:16:40.549 "peer_address": {
00:16:40.549 "trtype": "RDMA",
00:16:40.549 "adrfam": "IPv4",
00:16:40.549 "traddr": "192.168.100.8",
00:16:40.549 "trsvcid": "40488"
00:16:40.549 },
00:16:40.549 "auth": {
00:16:40.549 "state": "completed",
00:16:40.549 "digest": "sha384",
00:16:40.549 "dhgroup": "ffdhe6144"
00:16:40.549 }
00:16:40.549 }
00:16:40.549 ]'
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:40.549 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:40.807 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:40.807 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:40.807 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:40.807 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:40.807 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:40.807 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==:
00:16:40.807 10:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==:
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:41.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:41.998
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
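Each iteration ends with a symmetric teardown so the next combination starts from a clean slate. The three steps, collected from the trace (host-side bdev detach, kernel-side disconnect, target-side ACL removal; rpc_cmd again stands in for the target's rpc.py):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562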
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.998 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.256 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.256 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.256 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.256 10:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.514
00:16:42.514 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:42.514 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:42.514 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:42.771 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:42.771 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:42.772 {
00:16:42.772 "cntlid": 85,
00:16:42.772 "qid": 0,
00:16:42.772 "state": "enabled",
00:16:42.772 "thread": "nvmf_tgt_poll_group_000",
00:16:42.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562",
00:16:42.772 "listen_address": {
00:16:42.772 "trtype": "RDMA",
00:16:42.772 "adrfam": "IPv4",
00:16:42.772 "traddr": "192.168.100.8",
00:16:42.772 "trsvcid": "4420"
00:16:42.772 },
00:16:42.772 "peer_address": {
00:16:42.772 "trtype": "RDMA",
00:16:42.772 "adrfam": "IPv4",
00:16:42.772 "traddr": "192.168.100.8",
00:16:42.772 "trsvcid": "50037"
00:16:42.772 },
00:16:42.772 "auth": {
00:16:42.772 "state": "completed",
00:16:42.772 "digest": "sha384",
00:16:42.772 "dhgroup": "ffdhe6144"
00:16:42.772 }
00:16:42.772 }
00:16:42.772 ]'
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:42.772 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:43.030 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E:
00:16:43.030 10:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E:
00:16:43.963 10:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:43.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:43.963
00:16:43.963 10:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:16:43.963 10:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.963 10:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.963 10:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.221 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.221 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.221 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.221 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.221 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.478 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.736 { 00:16:44.736 "cntlid": 87, 00:16:44.736 "qid": 0, 00:16:44.736 "state": "enabled", 00:16:44.736 "thread": "nvmf_tgt_poll_group_000", 00:16:44.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:44.736 "listen_address": { 00:16:44.736 "trtype": "RDMA", 00:16:44.736 "adrfam": "IPv4", 00:16:44.736 "traddr": "192.168.100.8", 00:16:44.736 "trsvcid": "4420" 00:16:44.736 }, 00:16:44.736 "peer_address": { 00:16:44.736 "trtype": "RDMA", 00:16:44.736 "adrfam": "IPv4", 00:16:44.736 "traddr": "192.168.100.8", 00:16:44.736 "trsvcid": "43174" 00:16:44.736 }, 00:16:44.736 "auth": { 00:16:44.736 "state": "completed", 00:16:44.736 "digest": "sha384", 00:16:44.736 "dhgroup": "ffdhe6144" 00:16:44.736 } 00:16:44.736 } 00:16:44.736 ]' 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.736 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.995 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.995 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:16:44.995 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.995 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.995 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.995 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.253 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:45.253 10:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:45.818 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:46.076 10:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.334 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.901 00:16:46.901 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.901 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.901 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.159 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.160 { 00:16:47.160 "cntlid": 89, 00:16:47.160 "qid": 0, 00:16:47.160 "state": "enabled", 00:16:47.160 "thread": "nvmf_tgt_poll_group_000", 00:16:47.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:47.160 "listen_address": { 00:16:47.160 "trtype": "RDMA", 00:16:47.160 "adrfam": "IPv4", 00:16:47.160 "traddr": "192.168.100.8", 00:16:47.160 "trsvcid": "4420" 00:16:47.160 }, 00:16:47.160 "peer_address": { 00:16:47.160 "trtype": "RDMA", 00:16:47.160 "adrfam": "IPv4", 00:16:47.160 "traddr": "192.168.100.8", 00:16:47.160 "trsvcid": "36011" 00:16:47.160 }, 00:16:47.160 "auth": { 00:16:47.160 "state": "completed", 00:16:47.160 "digest": "sha384", 00:16:47.160 "dhgroup": "ffdhe8192" 00:16:47.160 } 00:16:47.160 } 00:16:47.160 ]' 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.160 10:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.418 10:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:47.418 10:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:48.352 10:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.352 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:48.352 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.352 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.352 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.352 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.352 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.352 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
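The repeating pattern in the trace is easier to follow with one cycle written out. The following is a minimal reconstruction assembled only from commands visible in this log; SUBNQN, HOSTNQN and TRADDR are placeholder variables rather than identifiers from target/auth.sh, and the target-side rpc.py socket argument is omitted because the trace does not show it.

    # One connect_authenticate cycle, reconstructed from the xtrace records
    # above (a sketch, not the actual target/auth.sh helper).
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
    TRADDR=192.168.100.8

    # 1. Pin the host-side bdev_nvme layer to one digest/dhgroup combination.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # 2. Register the host on the target with its DH-HMAC-CHAP key(s).
    rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Attach a controller over RDMA; authentication happens here.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a "$TRADDR" -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 4. Verify the controller came up and the qpair negotiated what was set:
    #    .auth.digest, .auth.dhgroup, and .auth.state == "completed".
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

    # 5. Tear down before the next digest/dhgroup/key combination.
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each cycle in the log additionally repeats the probe through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:..., then nvme disconnect) and finishes with nvmf_subsystem_remove_host, so the next iteration starts from a clean host table.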
00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.610 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.176 00:16:49.176 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.176 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.176 10:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.176 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.176 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.176 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.176 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.176 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.176 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.176 { 00:16:49.176 "cntlid": 91, 00:16:49.176 "qid": 0, 00:16:49.176 "state": "enabled", 00:16:49.176 "thread": "nvmf_tgt_poll_group_000", 00:16:49.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:49.176 "listen_address": { 00:16:49.176 "trtype": "RDMA", 00:16:49.176 "adrfam": "IPv4", 00:16:49.176 "traddr": "192.168.100.8", 00:16:49.176 "trsvcid": "4420" 00:16:49.176 }, 00:16:49.176 "peer_address": { 00:16:49.176 "trtype": "RDMA", 00:16:49.176 "adrfam": "IPv4", 00:16:49.176 "traddr": "192.168.100.8", 00:16:49.176 "trsvcid": "46294" 00:16:49.176 }, 00:16:49.176 "auth": { 
00:16:49.176 "state": "completed", 00:16:49.176 "digest": "sha384", 00:16:49.176 "dhgroup": "ffdhe8192" 00:16:49.176 } 00:16:49.176 } 00:16:49.176 ]' 00:16:49.176 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.434 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.434 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.434 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.434 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.434 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.434 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.435 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.693 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:49.693 10:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:50.257 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.515 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:50.515 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.515 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.515 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.515 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.515 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.515 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.773 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:50.773 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:50.773 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.773 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.773 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.773 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.773 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.774 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.774 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.774 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.774 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.774 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.774 10:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.340 00:16:51.340 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.340 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.340 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.599 { 00:16:51.599 "cntlid": 93, 00:16:51.599 "qid": 0, 00:16:51.599 "state": "enabled", 00:16:51.599 "thread": "nvmf_tgt_poll_group_000", 00:16:51.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:51.599 "listen_address": { 00:16:51.599 "trtype": "RDMA", 00:16:51.599 "adrfam": "IPv4", 00:16:51.599 "traddr": "192.168.100.8", 
00:16:51.599 "trsvcid": "4420" 00:16:51.599 }, 00:16:51.599 "peer_address": { 00:16:51.599 "trtype": "RDMA", 00:16:51.599 "adrfam": "IPv4", 00:16:51.599 "traddr": "192.168.100.8", 00:16:51.599 "trsvcid": "59437" 00:16:51.599 }, 00:16:51.599 "auth": { 00:16:51.599 "state": "completed", 00:16:51.599 "digest": "sha384", 00:16:51.599 "dhgroup": "ffdhe8192" 00:16:51.599 } 00:16:51.599 } 00:16:51.599 ]' 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.599 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.857 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:51.857 10:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.793 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.052 10:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.618 00:16:53.618 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.618 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.618 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.877 { 00:16:53.877 "cntlid": 95, 00:16:53.877 "qid": 0, 00:16:53.877 "state": "enabled", 00:16:53.877 "thread": "nvmf_tgt_poll_group_000", 00:16:53.877 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:53.877 "listen_address": { 00:16:53.877 "trtype": "RDMA", 00:16:53.877 "adrfam": "IPv4", 00:16:53.877 "traddr": "192.168.100.8", 00:16:53.877 "trsvcid": "4420" 00:16:53.877 }, 00:16:53.877 "peer_address": { 00:16:53.877 "trtype": "RDMA", 00:16:53.877 "adrfam": "IPv4", 00:16:53.877 "traddr": "192.168.100.8", 00:16:53.877 "trsvcid": "37974" 00:16:53.877 }, 00:16:53.877 "auth": { 00:16:53.877 "state": "completed", 00:16:53.877 "digest": "sha384", 00:16:53.877 "dhgroup": "ffdhe8192" 00:16:53.877 } 00:16:53.877 } 00:16:53.877 ]' 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.877 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.136 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:54.136 10:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.079 10:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.337 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.596 00:16:55.596 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.596 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.596 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.854 10:58:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.854 { 00:16:55.854 "cntlid": 97, 00:16:55.854 "qid": 0, 00:16:55.854 "state": "enabled", 00:16:55.854 "thread": "nvmf_tgt_poll_group_000", 00:16:55.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:55.854 "listen_address": { 00:16:55.854 "trtype": "RDMA", 00:16:55.854 "adrfam": "IPv4", 00:16:55.854 "traddr": "192.168.100.8", 00:16:55.854 "trsvcid": "4420" 00:16:55.854 }, 00:16:55.854 "peer_address": { 00:16:55.854 "trtype": "RDMA", 00:16:55.854 "adrfam": "IPv4", 00:16:55.854 "traddr": "192.168.100.8", 00:16:55.854 "trsvcid": "54268" 00:16:55.854 }, 00:16:55.854 "auth": { 00:16:55.854 "state": "completed", 00:16:55.854 "digest": "sha512", 00:16:55.854 "dhgroup": "null" 00:16:55.854 } 00:16:55.854 } 00:16:55.854 ]' 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.854 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.112 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:56.112 10:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
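At this point the trace has moved from the sha384/ffdhe combinations to sha512 with the null dhgroup, which makes the driving loop visible: the auth.sh@118, @119 and @120 markers are three nested for-loops over digests, dhgroups and key ids. A sketch of that structure, using only what the markers in this excerpt show (the full array contents are not visible here; hostrpc is the trace's wrapper for rpc.py -s /var/tmp/host.sock):

    # Nested sweep implied by the auth.sh@118-@123 markers in the trace.
    # Array contents in the comments are only the values seen in this excerpt.
    for digest in "${digests[@]}"; do        # @118: ... sha384 sha512
      for dhgroup in "${dhgroups[@]}"; do    # @119: null ... ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do       # @120: key ids 0..3
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @121
          connect_authenticate "$digest" "$dhgroup" "$keyid"            # @123
        done
      done
    done

One detail worth noting from the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion that keeps appearing: when a key id has no controller key (key3 in this run), the option is dropped entirely rather than passed empty, so those iterations exercise unidirectional authentication only.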
00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.046 10:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.304 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.562 00:16:57.562 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.562 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.562 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.820 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.820 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.820 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.820 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.820 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.820 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.820 { 00:16:57.820 "cntlid": 99, 00:16:57.820 "qid": 0, 00:16:57.820 "state": "enabled", 00:16:57.820 "thread": "nvmf_tgt_poll_group_000", 00:16:57.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:57.821 "listen_address": { 00:16:57.821 "trtype": "RDMA", 00:16:57.821 "adrfam": "IPv4", 00:16:57.821 "traddr": "192.168.100.8", 00:16:57.821 "trsvcid": "4420" 00:16:57.821 }, 00:16:57.821 "peer_address": { 00:16:57.821 "trtype": "RDMA", 00:16:57.821 "adrfam": "IPv4", 00:16:57.821 "traddr": "192.168.100.8", 00:16:57.821 "trsvcid": "41600" 00:16:57.821 }, 00:16:57.821 "auth": { 00:16:57.821 "state": "completed", 00:16:57.821 "digest": "sha512", 00:16:57.821 "dhgroup": "null" 00:16:57.821 } 00:16:57.821 } 00:16:57.821 ]' 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.821 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.078 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:58.078 10:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:16:59.012 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.012 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:59.012 
10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.012 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.012 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.012 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.012 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.012 10:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.271 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.529 00:16:59.529 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.529 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.529 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.787 
10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.787 { 00:16:59.787 "cntlid": 101, 00:16:59.787 "qid": 0, 00:16:59.787 "state": "enabled", 00:16:59.787 "thread": "nvmf_tgt_poll_group_000", 00:16:59.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:16:59.787 "listen_address": { 00:16:59.787 "trtype": "RDMA", 00:16:59.787 "adrfam": "IPv4", 00:16:59.787 "traddr": "192.168.100.8", 00:16:59.787 "trsvcid": "4420" 00:16:59.787 }, 00:16:59.787 "peer_address": { 00:16:59.787 "trtype": "RDMA", 00:16:59.787 "adrfam": "IPv4", 00:16:59.787 "traddr": "192.168.100.8", 00:16:59.787 "trsvcid": "36031" 00:16:59.787 }, 00:16:59.787 "auth": { 00:16:59.787 "state": "completed", 00:16:59.787 "digest": "sha512", 00:16:59.787 "dhgroup": "null" 00:16:59.787 } 00:16:59.787 } 00:16:59.787 ]' 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.787 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.046 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.046 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.046 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.046 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:00.046 10:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:00.982 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.240 10:58:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:01.240 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.240 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.240 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.240 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.240 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.240 10:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.240 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.241 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.241 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.241 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.241 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.499 00:17:01.499 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.499 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.499 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.757 { 00:17:01.757 "cntlid": 103, 00:17:01.757 "qid": 0, 00:17:01.757 "state": "enabled", 00:17:01.757 "thread": "nvmf_tgt_poll_group_000", 00:17:01.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:01.757 "listen_address": { 00:17:01.757 "trtype": "RDMA", 00:17:01.757 "adrfam": "IPv4", 00:17:01.757 "traddr": "192.168.100.8", 00:17:01.757 "trsvcid": "4420" 00:17:01.757 }, 00:17:01.757 "peer_address": { 00:17:01.757 "trtype": "RDMA", 00:17:01.757 "adrfam": "IPv4", 00:17:01.757 "traddr": "192.168.100.8", 00:17:01.757 "trsvcid": "43621" 00:17:01.757 }, 00:17:01.757 "auth": { 00:17:01.757 "state": "completed", 00:17:01.757 "digest": "sha512", 00:17:01.757 "dhgroup": "null" 00:17:01.757 } 00:17:01.757 } 00:17:01.757 ]' 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.757 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.016 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.016 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.016 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.016 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.016 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.274 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:02.274 10:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:02.840 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.098 10:58:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:03.098 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.098 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.098 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.098 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.098 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.098 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.098 10:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.357 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.615 00:17:03.615 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:17:03.615 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.615 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.925 { 00:17:03.925 "cntlid": 105, 00:17:03.925 "qid": 0, 00:17:03.925 "state": "enabled", 00:17:03.925 "thread": "nvmf_tgt_poll_group_000", 00:17:03.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:03.925 "listen_address": { 00:17:03.925 "trtype": "RDMA", 00:17:03.925 "adrfam": "IPv4", 00:17:03.925 "traddr": "192.168.100.8", 00:17:03.925 "trsvcid": "4420" 00:17:03.925 }, 00:17:03.925 "peer_address": { 00:17:03.925 "trtype": "RDMA", 00:17:03.925 "adrfam": "IPv4", 00:17:03.925 "traddr": "192.168.100.8", 00:17:03.925 "trsvcid": "52415" 00:17:03.925 }, 00:17:03.925 "auth": { 00:17:03.925 "state": "completed", 00:17:03.925 "digest": "sha512", 00:17:03.925 "dhgroup": "ffdhe2048" 00:17:03.925 } 00:17:03.925 } 00:17:03.925 ]' 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.925 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.241 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:04.241 10:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.176 10:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.435 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.694 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.694 { 00:17:05.694 "cntlid": 107, 00:17:05.694 "qid": 0, 00:17:05.694 "state": "enabled", 00:17:05.694 "thread": "nvmf_tgt_poll_group_000", 00:17:05.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:05.694 "listen_address": { 00:17:05.694 "trtype": "RDMA", 00:17:05.694 "adrfam": "IPv4", 00:17:05.694 "traddr": "192.168.100.8", 00:17:05.694 "trsvcid": "4420" 00:17:05.694 }, 00:17:05.694 "peer_address": { 00:17:05.694 "trtype": "RDMA", 00:17:05.694 "adrfam": "IPv4", 00:17:05.694 "traddr": "192.168.100.8", 00:17:05.694 "trsvcid": "39469" 00:17:05.694 }, 00:17:05.694 "auth": { 00:17:05.694 "state": "completed", 00:17:05.694 "digest": "sha512", 00:17:05.694 "dhgroup": "ffdhe2048" 00:17:05.694 } 00:17:05.694 } 00:17:05.694 ]' 00:17:05.694 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.952 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.953 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.953 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.953 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.953 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.953 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.953 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.212 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 
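Note: the entries that follow are the second leg of each iteration, where the kernel nvme-cli initiator authenticates with the same key material that the SPDK host stack just used. A minimal sketch of that leg, reconstructed from the commands traced at target/auth.sh@36/@80/@82/@83; $hostnqn, $hostid, $key and $ckey are stand-ins for the concrete values visible in the surrounding log:

    # connect through the kernel initiator, passing the DHHC-1 secrets directly
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
    # drop the connection and deregister the host before the next tuple runs
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"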
00:17:06.212 10:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.147 10:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.406 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.664 00:17:07.664 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.664 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.664 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.923 { 00:17:07.923 "cntlid": 109, 00:17:07.923 "qid": 0, 00:17:07.923 "state": "enabled", 00:17:07.923 "thread": "nvmf_tgt_poll_group_000", 00:17:07.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:07.923 "listen_address": { 00:17:07.923 "trtype": "RDMA", 00:17:07.923 "adrfam": "IPv4", 00:17:07.923 "traddr": "192.168.100.8", 00:17:07.923 "trsvcid": "4420" 00:17:07.923 }, 00:17:07.923 "peer_address": { 00:17:07.923 "trtype": "RDMA", 00:17:07.923 "adrfam": "IPv4", 00:17:07.923 "traddr": "192.168.100.8", 00:17:07.923 "trsvcid": "36678" 00:17:07.923 }, 00:17:07.923 "auth": { 00:17:07.923 "state": "completed", 00:17:07.923 "digest": "sha512", 00:17:07.923 "dhgroup": "ffdhe2048" 00:17:07.923 } 00:17:07.923 } 00:17:07.923 ]' 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.923 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.181 10:58:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:08.181 10:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.116 10:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.374 10:58:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.374 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.630 00:17:09.630 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.630 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.630 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.888 { 00:17:09.888 "cntlid": 111, 00:17:09.888 "qid": 0, 00:17:09.888 "state": "enabled", 00:17:09.888 "thread": "nvmf_tgt_poll_group_000", 00:17:09.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:09.888 "listen_address": { 00:17:09.888 "trtype": "RDMA", 00:17:09.888 "adrfam": "IPv4", 00:17:09.888 "traddr": "192.168.100.8", 00:17:09.888 "trsvcid": "4420" 00:17:09.888 }, 00:17:09.888 "peer_address": { 00:17:09.888 "trtype": "RDMA", 00:17:09.888 "adrfam": "IPv4", 00:17:09.888 "traddr": "192.168.100.8", 00:17:09.888 "trsvcid": "40954" 00:17:09.888 }, 00:17:09.888 "auth": { 00:17:09.888 "state": "completed", 00:17:09.888 "digest": "sha512", 00:17:09.888 "dhgroup": "ffdhe2048" 00:17:09.888 } 00:17:09.888 } 00:17:09.888 ]' 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.888 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.145 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:10.145 10:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.079 10:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.337 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.338 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.596 00:17:11.596 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.596 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.596 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.854 { 00:17:11.854 "cntlid": 113, 00:17:11.854 "qid": 0, 00:17:11.854 "state": "enabled", 00:17:11.854 "thread": "nvmf_tgt_poll_group_000", 00:17:11.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:11.854 "listen_address": { 00:17:11.854 "trtype": "RDMA", 00:17:11.854 "adrfam": "IPv4", 00:17:11.854 "traddr": "192.168.100.8", 00:17:11.854 "trsvcid": "4420" 00:17:11.854 }, 00:17:11.854 "peer_address": { 00:17:11.854 "trtype": "RDMA", 00:17:11.854 "adrfam": "IPv4", 00:17:11.854 "traddr": "192.168.100.8", 00:17:11.854 "trsvcid": "34549" 00:17:11.854 }, 00:17:11.854 "auth": { 00:17:11.854 "state": "completed", 00:17:11.854 "digest": "sha512", 00:17:11.854 "dhgroup": "ffdhe3072" 00:17:11.854 } 00:17:11.854 } 00:17:11.854 ]' 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.854 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.112 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:12.112 10:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:13.047 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.047 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:13.047 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.047 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.048 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.048 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.048 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.048 10:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
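Note: each connect_authenticate pass, like the one in flight here, first registers the host's DH-HMAC-CHAP key(s) on the target and then attaches a controller from the host-side SPDK app, which is where the handshake actually runs. A minimal sketch of that core, reconstructed from target/auth.sh@31/@60/@68/@70/@71 as traced above; the keys/ckeys arrays, $rootdir and $hostnqn are assumed to be set up earlier in the script:

    hostrpc() {
        # drive the host-side SPDK app over its own RPC socket (traced as @31)
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # inject --dhchap-ctrlr-key only when a controller key exists for this id
        local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
        # register the host on the target with its key(s) ...
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # ... then attach from the host side, triggering the authentication
        hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 \
            -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
    }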
00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.307 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.565 00:17:13.565 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.565 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.565 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.823 { 00:17:13.823 "cntlid": 115, 00:17:13.823 "qid": 0, 00:17:13.823 "state": "enabled", 00:17:13.823 "thread": "nvmf_tgt_poll_group_000", 00:17:13.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:13.823 "listen_address": { 00:17:13.823 "trtype": "RDMA", 00:17:13.823 "adrfam": "IPv4", 00:17:13.823 "traddr": "192.168.100.8", 00:17:13.823 "trsvcid": "4420" 00:17:13.823 }, 00:17:13.823 "peer_address": { 00:17:13.823 "trtype": "RDMA", 00:17:13.823 "adrfam": "IPv4", 00:17:13.823 "traddr": "192.168.100.8", 00:17:13.823 "trsvcid": "52076" 00:17:13.823 }, 00:17:13.823 "auth": { 00:17:13.823 "state": "completed", 00:17:13.823 "digest": "sha512", 00:17:13.823 "dhgroup": "ffdhe3072" 00:17:13.823 } 00:17:13.823 } 00:17:13.823 ]' 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
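Note: the jq checks interleaved through these entries assert what the handshake negotiated. A compact sketch of that verification block (target/auth.sh@73 through @78 in the trace); $digest and $dhgroup hold the current iteration's expected values:

    # the controller must have come up under the requested name
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # ask the target for the qpair and compare the negotiated auth parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # detach so the next tuple starts from a clean host state
    hostrpc bdev_nvme_detach_controller nvme0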
00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.823 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.080 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.080 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.080 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.080 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:14.080 10:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:15.014 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.273 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:15.273 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.273 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.273 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.273 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.273 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.273 10:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.273 
10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.273 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.532 00:17:15.532 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.532 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.532 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.790 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.790 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.790 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.790 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.790 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.790 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.790 { 00:17:15.790 "cntlid": 117, 00:17:15.790 "qid": 0, 00:17:15.790 "state": "enabled", 00:17:15.790 "thread": "nvmf_tgt_poll_group_000", 00:17:15.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:15.790 "listen_address": { 00:17:15.790 "trtype": "RDMA", 00:17:15.790 "adrfam": "IPv4", 00:17:15.790 "traddr": "192.168.100.8", 00:17:15.790 "trsvcid": "4420" 00:17:15.790 }, 00:17:15.790 "peer_address": { 00:17:15.790 "trtype": "RDMA", 00:17:15.790 "adrfam": "IPv4", 00:17:15.790 "traddr": "192.168.100.8", 00:17:15.790 "trsvcid": "37828" 00:17:15.790 }, 00:17:15.790 "auth": { 00:17:15.790 "state": "completed", 00:17:15.790 "digest": "sha512", 00:17:15.790 "dhgroup": "ffdhe3072" 00:17:15.790 } 00:17:15.790 } 00:17:15.790 ]' 00:17:15.791 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:15.791 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.791 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.048 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.048 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.048 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.048 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.048 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.048 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:16.048 10:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:16.983 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.242 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:17.242 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.242 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.242 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.242 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.242 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.242 10:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
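Note: the key3 iteration beginning here takes the other branch of the ${ckeys[$3]:+...} expansion traced at target/auth.sh@68: when ckeys[3] is unset or empty, the ckey array stays empty and no --dhchap-ctrlr-key is passed, which is why the key3 add_host and attach calls in this log carry --dhchap-key key3 alone. A tiny self-contained illustration of the idiom (array contents hypothetical):

    ckeys=("c0" "c1" "c2" "")      # hypothetical: no controller key for id 3
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"             # prints 0: the option pair was not injected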
00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.242 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.501 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.501 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:17.501 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.501 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.501 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.759 { 00:17:17.759 "cntlid": 119, 00:17:17.759 "qid": 0, 00:17:17.759 "state": "enabled", 00:17:17.759 "thread": "nvmf_tgt_poll_group_000", 00:17:17.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:17.759 "listen_address": { 00:17:17.759 "trtype": "RDMA", 00:17:17.759 "adrfam": "IPv4", 00:17:17.759 "traddr": "192.168.100.8", 00:17:17.759 "trsvcid": "4420" 00:17:17.759 }, 00:17:17.759 "peer_address": { 00:17:17.759 "trtype": "RDMA", 00:17:17.759 "adrfam": "IPv4", 00:17:17.759 "traddr": "192.168.100.8", 00:17:17.759 "trsvcid": "38596" 00:17:17.759 }, 00:17:17.759 "auth": { 00:17:17.759 "state": "completed", 00:17:17.759 "digest": "sha512", 00:17:17.759 "dhgroup": "ffdhe3072" 
00:17:17.759 } 00:17:17.759 } 00:17:17.759 ]' 00:17:17.759 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.017 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.017 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.017 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.017 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.017 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.017 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.018 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.275 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:18.275 10:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.209 10:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.468 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.726 00:17:19.726 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.726 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.726 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.984 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.984 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.984 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.985 { 00:17:19.985 "cntlid": 121, 00:17:19.985 "qid": 0, 00:17:19.985 "state": "enabled", 00:17:19.985 "thread": "nvmf_tgt_poll_group_000", 00:17:19.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:19.985 "listen_address": { 00:17:19.985 "trtype": "RDMA", 00:17:19.985 "adrfam": "IPv4", 00:17:19.985 "traddr": "192.168.100.8", 00:17:19.985 "trsvcid": "4420" 00:17:19.985 }, 00:17:19.985 "peer_address": { 00:17:19.985 "trtype": "RDMA", 
00:17:19.985 "adrfam": "IPv4", 00:17:19.985 "traddr": "192.168.100.8", 00:17:19.985 "trsvcid": "49900" 00:17:19.985 }, 00:17:19.985 "auth": { 00:17:19.985 "state": "completed", 00:17:19.985 "digest": "sha512", 00:17:19.985 "dhgroup": "ffdhe4096" 00:17:19.985 } 00:17:19.985 } 00:17:19.985 ]' 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.985 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.243 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:20.243 10:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.180 10:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.438 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.697 00:17:21.697 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.697 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.697 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.955 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.955 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.955 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.955 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.955 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.955 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.955 { 00:17:21.955 "cntlid": 123, 00:17:21.955 "qid": 0, 00:17:21.955 "state": "enabled", 00:17:21.955 "thread": "nvmf_tgt_poll_group_000", 
00:17:21.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:21.955 "listen_address": { 00:17:21.955 "trtype": "RDMA", 00:17:21.955 "adrfam": "IPv4", 00:17:21.955 "traddr": "192.168.100.8", 00:17:21.955 "trsvcid": "4420" 00:17:21.955 }, 00:17:21.955 "peer_address": { 00:17:21.955 "trtype": "RDMA", 00:17:21.955 "adrfam": "IPv4", 00:17:21.955 "traddr": "192.168.100.8", 00:17:21.955 "trsvcid": "39571" 00:17:21.955 }, 00:17:21.955 "auth": { 00:17:21.955 "state": "completed", 00:17:21.955 "digest": "sha512", 00:17:21.955 "dhgroup": "ffdhe4096" 00:17:21.955 } 00:17:21.955 } 00:17:21.955 ]' 00:17:21.955 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.956 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.956 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.956 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.956 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.956 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.956 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.956 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.214 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:22.214 10:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:23.149 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.149 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:23.149 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.149 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.149 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.150 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.150 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:17:23.150 10:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.408 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.667 00:17:23.667 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.667 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.667 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
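The SPDK-initiator half of each pass is a single bdev_nvme_attach_controller call naming the keyring entries to authenticate with; isolated from the loop, the key2 iteration above reduces to the following (addresses, NQNs, and key names exactly as logged):

    # Attach over RDMA as an NVMe-oF initiator, authenticating with
    # DH-HMAC-CHAP using host key 'key2' and controller key 'ckey2'.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

The matching teardown is the bdev_nvme_detach_controller nvme0 call that follows each verification.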
00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.925 { 00:17:23.925 "cntlid": 125, 00:17:23.925 "qid": 0, 00:17:23.925 "state": "enabled", 00:17:23.925 "thread": "nvmf_tgt_poll_group_000", 00:17:23.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:23.925 "listen_address": { 00:17:23.925 "trtype": "RDMA", 00:17:23.925 "adrfam": "IPv4", 00:17:23.925 "traddr": "192.168.100.8", 00:17:23.925 "trsvcid": "4420" 00:17:23.925 }, 00:17:23.925 "peer_address": { 00:17:23.925 "trtype": "RDMA", 00:17:23.925 "adrfam": "IPv4", 00:17:23.925 "traddr": "192.168.100.8", 00:17:23.925 "trsvcid": "57750" 00:17:23.925 }, 00:17:23.925 "auth": { 00:17:23.925 "state": "completed", 00:17:23.925 "digest": "sha512", 00:17:23.925 "dhgroup": "ffdhe4096" 00:17:23.925 } 00:17:23.925 } 00:17:23.925 ]' 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.925 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.184 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:24.184 10:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:25.120 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.120 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:25.120 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.120 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.120 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.120 10:59:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.120 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.120 10:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.379 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.637 00:17:25.637 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.637 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.637 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.896 10:59:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.896 { 00:17:25.896 "cntlid": 127, 00:17:25.896 "qid": 0, 00:17:25.896 "state": "enabled", 00:17:25.896 "thread": "nvmf_tgt_poll_group_000", 00:17:25.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:25.896 "listen_address": { 00:17:25.896 "trtype": "RDMA", 00:17:25.896 "adrfam": "IPv4", 00:17:25.896 "traddr": "192.168.100.8", 00:17:25.896 "trsvcid": "4420" 00:17:25.896 }, 00:17:25.896 "peer_address": { 00:17:25.896 "trtype": "RDMA", 00:17:25.896 "adrfam": "IPv4", 00:17:25.896 "traddr": "192.168.100.8", 00:17:25.896 "trsvcid": "48439" 00:17:25.896 }, 00:17:25.896 "auth": { 00:17:25.896 "state": "completed", 00:17:25.896 "digest": "sha512", 00:17:25.896 "dhgroup": "ffdhe4096" 00:17:25.896 } 00:17:25.896 } 00:17:25.896 ]' 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.896 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.154 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.154 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.154 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.154 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:26.154 10:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:27.088 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.346 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:27.346 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.346 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.347 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.347 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.347 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.347 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.347 10:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.347 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.913 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.914 10:59:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.914 { 00:17:27.914 "cntlid": 129, 00:17:27.914 "qid": 0, 00:17:27.914 "state": "enabled", 00:17:27.914 "thread": "nvmf_tgt_poll_group_000", 00:17:27.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:27.914 "listen_address": { 00:17:27.914 "trtype": "RDMA", 00:17:27.914 "adrfam": "IPv4", 00:17:27.914 "traddr": "192.168.100.8", 00:17:27.914 "trsvcid": "4420" 00:17:27.914 }, 00:17:27.914 "peer_address": { 00:17:27.914 "trtype": "RDMA", 00:17:27.914 "adrfam": "IPv4", 00:17:27.914 "traddr": "192.168.100.8", 00:17:27.914 "trsvcid": "60065" 00:17:27.914 }, 00:17:27.914 "auth": { 00:17:27.914 "state": "completed", 00:17:27.914 "digest": "sha512", 00:17:27.914 "dhgroup": "ffdhe6144" 00:17:27.914 } 00:17:27.914 } 00:17:27.914 ]' 00:17:27.914 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.172 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.172 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.172 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.172 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.172 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.172 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.172 10:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.431 10:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:28.431 10:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:29.365 10:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.365 10:59:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:29.365 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.365 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.365 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.365 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.365 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.365 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.623 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.882 00:17:29.882 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.882 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.882 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.140 { 00:17:30.140 "cntlid": 131, 00:17:30.140 "qid": 0, 00:17:30.140 "state": "enabled", 00:17:30.140 "thread": "nvmf_tgt_poll_group_000", 00:17:30.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:30.140 "listen_address": { 00:17:30.140 "trtype": "RDMA", 00:17:30.140 "adrfam": "IPv4", 00:17:30.140 "traddr": "192.168.100.8", 00:17:30.140 "trsvcid": "4420" 00:17:30.140 }, 00:17:30.140 "peer_address": { 00:17:30.140 "trtype": "RDMA", 00:17:30.140 "adrfam": "IPv4", 00:17:30.140 "traddr": "192.168.100.8", 00:17:30.140 "trsvcid": "37276" 00:17:30.140 }, 00:17:30.140 "auth": { 00:17:30.140 "state": "completed", 00:17:30.140 "digest": "sha512", 00:17:30.140 "dhgroup": "ffdhe6144" 00:17:30.140 } 00:17:30.140 } 00:17:30.140 ]' 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.140 10:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.399 10:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.399 10:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.399 10:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.399 10:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:30.399 10:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret 
DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:31.333 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.592 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.158 00:17:32.158 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.158 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.158 10:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.158 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.158 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.158 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.158 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.158 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.158 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.158 { 00:17:32.158 "cntlid": 133, 00:17:32.158 "qid": 0, 00:17:32.158 "state": "enabled", 00:17:32.158 "thread": "nvmf_tgt_poll_group_000", 00:17:32.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:32.158 "listen_address": { 00:17:32.158 "trtype": "RDMA", 00:17:32.158 "adrfam": "IPv4", 00:17:32.158 "traddr": "192.168.100.8", 00:17:32.158 "trsvcid": "4420" 00:17:32.158 }, 00:17:32.158 "peer_address": { 00:17:32.158 "trtype": "RDMA", 00:17:32.158 "adrfam": "IPv4", 00:17:32.158 "traddr": "192.168.100.8", 00:17:32.158 "trsvcid": "33304" 00:17:32.158 }, 00:17:32.158 "auth": { 00:17:32.158 "state": "completed", 00:17:32.158 "digest": "sha512", 00:17:32.158 "dhgroup": "ffdhe6144" 00:17:32.158 } 00:17:32.158 } 00:17:32.158 ]' 00:17:32.158 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.417 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.417 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.417 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.417 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.417 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.417 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.417 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.676 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:32.676 10:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.611 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.869 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.127 00:17:34.127 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.127 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.127 10:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.385 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.385 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.385 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.385 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.385 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.385 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.385 { 00:17:34.385 "cntlid": 135, 00:17:34.385 "qid": 0, 00:17:34.385 "state": "enabled", 00:17:34.386 "thread": "nvmf_tgt_poll_group_000", 00:17:34.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:34.386 "listen_address": { 00:17:34.386 "trtype": "RDMA", 00:17:34.386 "adrfam": "IPv4", 00:17:34.386 "traddr": "192.168.100.8", 00:17:34.386 "trsvcid": "4420" 00:17:34.386 }, 00:17:34.386 "peer_address": { 00:17:34.386 "trtype": "RDMA", 00:17:34.386 "adrfam": "IPv4", 00:17:34.386 "traddr": "192.168.100.8", 00:17:34.386 "trsvcid": "38126" 00:17:34.386 }, 00:17:34.386 "auth": { 00:17:34.386 "state": "completed", 00:17:34.386 "digest": "sha512", 00:17:34.386 "dhgroup": "ffdhe6144" 00:17:34.386 } 00:17:34.386 } 00:17:34.386 ]' 00:17:34.386 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.386 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.386 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.386 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.386 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.644 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.644 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.644 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.644 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 
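The round traced above is repeated for every digest/dhgroup/key combination under test. A minimal sketch of one such round, reusing only commands, sockets, addresses, and NQNs that appear verbatim in this trace (none of these values are canonical; the host UUID and key names come from this run):

    # Host side: restrict DH-HMAC-CHAP negotiation to a single digest and DH group.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Target side: authorize the host NQN with key3; adding --dhchap-ctrlr-key
    # would additionally request bidirectional (controller) authentication.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
        --dhchap-key key3

    # Host side: attach a controller over RDMA; authentication runs during connect.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

    # Confirm the qpair finished authentication with the expected parameters.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'

The qpairs JSON dumped throughout the trace is the output of that last call, which the script asserts against the expected digest, dhgroup, and "completed" state before detaching and moving to the next combination.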
00:17:34.644 10:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:35.578 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.837 10:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.402 00:17:36.402 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.402 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.402 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.660 { 00:17:36.660 "cntlid": 137, 00:17:36.660 "qid": 0, 00:17:36.660 "state": "enabled", 00:17:36.660 "thread": "nvmf_tgt_poll_group_000", 00:17:36.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:36.660 "listen_address": { 00:17:36.660 "trtype": "RDMA", 00:17:36.660 "adrfam": "IPv4", 00:17:36.660 "traddr": "192.168.100.8", 00:17:36.660 "trsvcid": "4420" 00:17:36.660 }, 00:17:36.660 "peer_address": { 00:17:36.660 "trtype": "RDMA", 00:17:36.660 "adrfam": "IPv4", 00:17:36.660 "traddr": "192.168.100.8", 00:17:36.660 "trsvcid": "60970" 00:17:36.660 }, 00:17:36.660 "auth": { 00:17:36.660 "state": "completed", 00:17:36.660 "digest": "sha512", 00:17:36.660 "dhgroup": "ffdhe8192" 00:17:36.660 } 00:17:36.660 } 00:17:36.660 ]' 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.660 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.919 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:36.919 10:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:37.855 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.855 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:37.855 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.855 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:17:38.113 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.114 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.114 10:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.680 00:17:38.680 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.680 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.680 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.939 { 00:17:38.939 "cntlid": 139, 00:17:38.939 "qid": 0, 00:17:38.939 "state": "enabled", 00:17:38.939 "thread": "nvmf_tgt_poll_group_000", 00:17:38.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:38.939 "listen_address": { 00:17:38.939 "trtype": "RDMA", 00:17:38.939 "adrfam": "IPv4", 00:17:38.939 "traddr": "192.168.100.8", 00:17:38.939 "trsvcid": "4420" 00:17:38.939 }, 00:17:38.939 "peer_address": { 00:17:38.939 "trtype": "RDMA", 00:17:38.939 "adrfam": "IPv4", 00:17:38.939 "traddr": "192.168.100.8", 00:17:38.939 "trsvcid": "49842" 00:17:38.939 }, 00:17:38.939 "auth": { 00:17:38.939 "state": "completed", 00:17:38.939 "digest": "sha512", 00:17:38.939 "dhgroup": "ffdhe8192" 00:17:38.939 } 00:17:38.939 } 00:17:38.939 ]' 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.939 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.198 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:39.198 10:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: --dhchap-ctrl-secret DHHC-1:02:OTA2OWU1YzJiMWMyNjczM2I2OTI3MWFkMDFhNDkyZTUzNzAyOGZhM2I4YmRkYzRm0jXpgQ==: 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.133 10:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.392 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.959 00:17:40.959 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.959 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.959 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.218 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.218 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.218 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.218 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.218 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.218 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.218 { 00:17:41.218 "cntlid": 141, 00:17:41.218 "qid": 0, 00:17:41.218 "state": "enabled", 00:17:41.218 "thread": "nvmf_tgt_poll_group_000", 00:17:41.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:41.218 "listen_address": { 00:17:41.218 "trtype": "RDMA", 00:17:41.218 "adrfam": "IPv4", 00:17:41.218 "traddr": "192.168.100.8", 00:17:41.218 "trsvcid": "4420" 00:17:41.218 }, 00:17:41.218 "peer_address": { 00:17:41.218 "trtype": "RDMA", 00:17:41.218 "adrfam": "IPv4", 00:17:41.218 "traddr": "192.168.100.8", 00:17:41.218 "trsvcid": "57188" 00:17:41.218 }, 00:17:41.218 "auth": { 00:17:41.218 "state": "completed", 00:17:41.218 "digest": "sha512", 00:17:41.218 "dhgroup": "ffdhe8192" 00:17:41.218 } 00:17:41.218 } 00:17:41.218 ]' 00:17:41.218 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.219 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.219 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.219 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.219 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.219 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.219 10:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.219 10:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.477 10:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:41.477 10:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:01:NTYwOWMyZTIyYjUyMDdkNjBmODU3NWIwOWZlNzQ1NTQftt3E: 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.411 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.696 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.373 00:17:43.373 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.373 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.373 10:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.373 { 00:17:43.373 "cntlid": 143, 00:17:43.373 "qid": 0, 00:17:43.373 "state": "enabled", 00:17:43.373 "thread": "nvmf_tgt_poll_group_000", 00:17:43.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:43.373 "listen_address": { 00:17:43.373 "trtype": "RDMA", 00:17:43.373 "adrfam": "IPv4", 00:17:43.373 "traddr": "192.168.100.8", 00:17:43.373 "trsvcid": "4420" 00:17:43.373 }, 00:17:43.373 "peer_address": { 00:17:43.373 "trtype": "RDMA", 00:17:43.373 "adrfam": "IPv4", 00:17:43.373 "traddr": "192.168.100.8", 00:17:43.373 "trsvcid": "35751" 00:17:43.373 }, 00:17:43.373 "auth": { 00:17:43.373 "state": "completed", 00:17:43.373 "digest": "sha512", 00:17:43.373 "dhgroup": "ffdhe8192" 00:17:43.373 } 00:17:43.373 } 00:17:43.373 ]' 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.373 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.373 10:59:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.374 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.374 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.632 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.632 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.632 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.632 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:43.632 10:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:44.568 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.825 10:59:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.825 10:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.389 00:17:45.389 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.389 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.390 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.647 { 00:17:45.647 "cntlid": 145, 00:17:45.647 "qid": 0, 00:17:45.647 "state": "enabled", 00:17:45.647 "thread": "nvmf_tgt_poll_group_000", 00:17:45.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:45.647 "listen_address": { 00:17:45.647 "trtype": "RDMA", 00:17:45.647 "adrfam": "IPv4", 00:17:45.647 "traddr": "192.168.100.8", 00:17:45.647 "trsvcid": "4420" 00:17:45.647 }, 00:17:45.647 
"peer_address": { 00:17:45.647 "trtype": "RDMA", 00:17:45.647 "adrfam": "IPv4", 00:17:45.647 "traddr": "192.168.100.8", 00:17:45.647 "trsvcid": "60842" 00:17:45.647 }, 00:17:45.647 "auth": { 00:17:45.647 "state": "completed", 00:17:45.647 "digest": "sha512", 00:17:45.647 "dhgroup": "ffdhe8192" 00:17:45.647 } 00:17:45.647 } 00:17:45.647 ]' 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.647 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.904 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.904 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.904 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.904 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:45.904 10:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWI2YWQwOGZhOGE3YzM1NmYxMTBkZmJjMWU5YWNiYzVhMzVlZDhiZWJlMzkzMWUzbOL3KQ==: --dhchap-ctrl-secret DHHC-1:03:YTA2NDk5ZDU4ZTkzNDY3YjlkMDNmMGQwZmRhZDQwOTg2MDU2MDZmZWVkYThiMzY2ZDMxNDRkM2EyZDg1N2ExNS6jUwU=: 00:17:46.837 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.095 10:59:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.095 10:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.661 request: 00:17:47.661 { 00:17:47.661 "name": "nvme0", 00:17:47.661 "trtype": "rdma", 00:17:47.661 "traddr": "192.168.100.8", 00:17:47.661 "adrfam": "ipv4", 00:17:47.661 "trsvcid": "4420", 00:17:47.661 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:47.661 "prchk_reftag": false, 00:17:47.661 "prchk_guard": false, 00:17:47.661 "hdgst": false, 00:17:47.661 "ddgst": false, 00:17:47.661 "dhchap_key": "key2", 00:17:47.661 "allow_unrecognized_csi": false, 00:17:47.661 "method": "bdev_nvme_attach_controller", 00:17:47.661 "req_id": 1 00:17:47.661 } 00:17:47.661 Got JSON-RPC error response 00:17:47.661 response: 00:17:47.661 { 00:17:47.661 "code": -5, 00:17:47.661 "message": "Input/output error" 00:17:47.661 } 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.661 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.920 request: 00:17:47.920 { 00:17:47.920 "name": "nvme0", 00:17:47.920 "trtype": "rdma", 00:17:47.920 "traddr": "192.168.100.8", 00:17:47.920 "adrfam": "ipv4", 00:17:47.920 "trsvcid": "4420", 00:17:47.920 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:47.920 "prchk_reftag": false, 00:17:47.920 "prchk_guard": false, 00:17:47.920 "hdgst": false, 00:17:47.920 "ddgst": false, 00:17:47.920 "dhchap_key": "key1", 00:17:47.920 "dhchap_ctrlr_key": "ckey2", 00:17:47.920 "allow_unrecognized_csi": false, 00:17:47.920 "method": "bdev_nvme_attach_controller", 00:17:47.920 "req_id": 1 00:17:47.920 } 00:17:47.920 Got JSON-RPC error response 00:17:47.920 response: 00:17:47.920 { 00:17:47.920 "code": -5, 00:17:47.920 "message": "Input/output error" 00:17:47.920 } 00:17:47.920 10:59:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.920 10:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.488 request: 00:17:48.488 { 00:17:48.488 "name": "nvme0", 
00:17:48.488 "trtype": "rdma", 00:17:48.488 "traddr": "192.168.100.8", 00:17:48.488 "adrfam": "ipv4", 00:17:48.488 "trsvcid": "4420", 00:17:48.488 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:48.488 "prchk_reftag": false, 00:17:48.488 "prchk_guard": false, 00:17:48.488 "hdgst": false, 00:17:48.488 "ddgst": false, 00:17:48.488 "dhchap_key": "key1", 00:17:48.488 "dhchap_ctrlr_key": "ckey1", 00:17:48.488 "allow_unrecognized_csi": false, 00:17:48.488 "method": "bdev_nvme_attach_controller", 00:17:48.488 "req_id": 1 00:17:48.488 } 00:17:48.488 Got JSON-RPC error response 00:17:48.488 response: 00:17:48.488 { 00:17:48.488 "code": -5, 00:17:48.488 "message": "Input/output error" 00:17:48.488 } 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1430152 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1430152 ']' 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1430152 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1430152 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1430152' 00:17:48.488 killing process with pid 1430152 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1430152 00:17:48.488 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1430152 00:17:48.746 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:48.746 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.746 10:59:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:48.746 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.746 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1457518 00:17:48.746 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:48.746 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1457518 00:17:48.746 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1457518 ']' 00:17:48.747 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.747 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:48.747 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.747 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:48.747 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1457518 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1457518 ']' 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
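The three request/response blocks earlier in the trace show the expected failure path: when the key passed to bdev_nvme_attach_controller does not match what was registered for the host, the RPC returns JSON-RPC error -5 ("Input/output error") and the NOT wrapper asserts the non-zero exit. A sketch of that check under the same values as above, with key2 deliberately mismatching the key1 registration:

    # Expect authentication to fail: the host was added with key1 only.
    if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
        echo "attach succeeded but should have been rejected" >&2
        exit 1
    fi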
00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:49.005 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.264 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:49.264 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:49.264 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:49.264 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.264 10:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.264 null0 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fzD 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Uj3 ]] 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Uj3 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2Oo 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Uza ]] 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Uza 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.522 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
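The keyring registrations here (and the key2/key3 iterations that continue below) all follow the loop at target/auth.sh@174-176: one keyring_file_add_key per host key, plus a ckey<i> registration whenever a controller key file was generated. A minimal sketch of the same shape; the two arrays are an illustrative stand-in for the script's own bookkeeping, with file names taken from this run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.fzD /tmp/spdk.key-sha256.2Oo /tmp/spdk.key-sha384.yyp /tmp/spdk.key-sha512.kEk)
ckeys=(/tmp/spdk.key-sha512.Uj3 /tmp/spdk.key-sha384.Uza /tmp/spdk.key-sha256.Cf4 '')
for i in "${!keys[@]}"; do
    # register the host DH-HMAC-CHAP key under the name key<i>
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
    # register the optional bidirectional (controller) key as ckey<i>
    [[ -n ${ckeys[$i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done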
00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yyp 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Cf4 ]] 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cf4 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kEk 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.523 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.457 nvme0n1 00:17:50.458 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.458 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.458 10:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.458 { 00:17:50.458 "cntlid": 1, 00:17:50.458 "qid": 0, 00:17:50.458 "state": "enabled", 00:17:50.458 "thread": "nvmf_tgt_poll_group_000", 00:17:50.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:50.458 "listen_address": { 00:17:50.458 "trtype": "RDMA", 00:17:50.458 "adrfam": "IPv4", 00:17:50.458 "traddr": "192.168.100.8", 00:17:50.458 "trsvcid": "4420" 00:17:50.458 }, 00:17:50.458 "peer_address": { 00:17:50.458 "trtype": "RDMA", 00:17:50.458 "adrfam": "IPv4", 00:17:50.458 "traddr": "192.168.100.8", 00:17:50.458 "trsvcid": "42686" 00:17:50.458 }, 00:17:50.458 "auth": { 00:17:50.458 "state": "completed", 00:17:50.458 "digest": "sha512", 00:17:50.458 "dhgroup": "ffdhe8192" 00:17:50.458 } 00:17:50.458 } 00:17:50.458 ]' 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.458 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.716 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:50.716 10:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:51.651 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.909 10:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.166 request: 00:17:52.166 { 00:17:52.166 "name": "nvme0", 00:17:52.166 "trtype": "rdma", 00:17:52.166 "traddr": "192.168.100.8", 00:17:52.166 "adrfam": "ipv4", 00:17:52.166 "trsvcid": "4420", 00:17:52.166 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:52.166 "prchk_reftag": false, 00:17:52.166 "prchk_guard": false, 00:17:52.166 "hdgst": false, 00:17:52.166 "ddgst": false, 00:17:52.166 "dhchap_key": "key3", 00:17:52.166 "allow_unrecognized_csi": false, 00:17:52.166 "method": "bdev_nvme_attach_controller", 00:17:52.166 "req_id": 1 00:17:52.166 } 00:17:52.166 Got JSON-RPC error response 00:17:52.166 response: 00:17:52.166 { 00:17:52.166 "code": -5, 00:17:52.166 "message": "Input/output error" 00:17:52.166 } 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:52.166 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
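The Input/output errors around here are deliberate: each NOT bdev_connect case first narrows the host's DH-HMAC-CHAP parameters with bdev_nvme_set_options (only sha256 digests at @183, only the ffdhe2048 group at @187), so the attach with key3 can no longer negotiate and the RPC returns -5, which the NOT wrapper asserts. The first of those checks, sketched standalone with the addresses and NQNs from this run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# narrow the host side to sha256 only, forcing a negotiation mismatch
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
# the attach must now fail; invert the exit status so that failure means pass
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3; then
    echo "unexpected success" >&2
    exit 1
fi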
00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.424 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.682 request: 00:17:52.682 { 00:17:52.682 "name": "nvme0", 00:17:52.682 "trtype": "rdma", 00:17:52.682 "traddr": "192.168.100.8", 00:17:52.682 "adrfam": "ipv4", 00:17:52.682 "trsvcid": "4420", 00:17:52.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:52.682 "prchk_reftag": false, 00:17:52.682 "prchk_guard": false, 00:17:52.682 "hdgst": false, 00:17:52.682 "ddgst": false, 00:17:52.682 "dhchap_key": "key3", 00:17:52.682 "allow_unrecognized_csi": false, 00:17:52.682 "method": "bdev_nvme_attach_controller", 00:17:52.682 "req_id": 1 00:17:52.682 } 00:17:52.682 Got JSON-RPC error response 00:17:52.682 response: 00:17:52.682 { 00:17:52.682 "code": -5, 00:17:52.682 "message": "Input/output error" 00:17:52.682 } 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.682 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.683 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.940 10:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.198 request: 00:17:53.198 { 00:17:53.198 "name": "nvme0", 00:17:53.198 "trtype": "rdma", 00:17:53.198 "traddr": "192.168.100.8", 00:17:53.198 "adrfam": "ipv4", 00:17:53.198 "trsvcid": "4420", 00:17:53.198 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:53.198 "prchk_reftag": false, 00:17:53.198 "prchk_guard": false, 00:17:53.198 "hdgst": false, 00:17:53.198 "ddgst": false, 00:17:53.198 "dhchap_key": "key0", 00:17:53.198 "dhchap_ctrlr_key": "key1", 00:17:53.198 "allow_unrecognized_csi": false, 00:17:53.198 "method": "bdev_nvme_attach_controller", 00:17:53.198 "req_id": 1 00:17:53.198 } 00:17:53.198 Got JSON-RPC error response 00:17:53.198 response: 00:17:53.198 { 00:17:53.198 "code": -5, 00:17:53.198 "message": "Input/output error" 00:17:53.198 } 00:17:53.198 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:53.198 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.198 
10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.198 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.198 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:53.198 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:53.198 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:53.456 nvme0n1 00:17:53.456 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:53.456 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:53.456 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.714 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.714 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.714 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.972 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 00:17:53.972 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.972 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.972 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.972 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:53.972 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:53.972 10:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.907 nvme0n1 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:54.907 10:59:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:54.907 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.166 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.166 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:55.166 10:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: --dhchap-ctrl-secret DHHC-1:03:NjNlNGM1ZTdkMzEzODBiOTk2ZjNlMTFjOGFmOGI3NzViMmVlNGNjMGZkMTllZjEyYzU2YmY5MzE2MTY1ZjJjYqZwdmA=: 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:56.101 10:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:56.667 request: 00:17:56.667 { 00:17:56.667 "name": "nvme0", 00:17:56.667 "trtype": "rdma", 00:17:56.667 "traddr": "192.168.100.8", 00:17:56.667 "adrfam": "ipv4", 00:17:56.667 "trsvcid": "4420", 00:17:56.667 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:17:56.667 "prchk_reftag": false, 00:17:56.667 "prchk_guard": false, 00:17:56.667 "hdgst": false, 00:17:56.667 "ddgst": false, 00:17:56.667 "dhchap_key": "key1", 00:17:56.667 "allow_unrecognized_csi": false, 00:17:56.667 "method": "bdev_nvme_attach_controller", 00:17:56.667 "req_id": 1 00:17:56.667 } 00:17:56.667 Got JSON-RPC error response 00:17:56.667 response: 00:17:56.667 { 00:17:56.667 "code": -5, 00:17:56.667 "message": "Input/output error" 00:17:56.667 } 00:17:56.667 10:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:56.667 10:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.667 10:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.667 10:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.667 10:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.667 10:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.667 10:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:57.234 nvme0n1 00:17:57.492 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:57.492 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:57.492 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.492 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.492 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.492 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.750 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:57.750 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.750 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.750 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.750 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:57.750 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:57.750 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:58.009 nvme0n1 00:17:58.009 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:58.009 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:58.009 10:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.267 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.267 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.267 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: '' 2s 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: ]] 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmMzYTc1NjE5YTA2ZWMxNzcxOGZhMjEwYWIxNTQ0YWGhWiRl: 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:58.525 10:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.424 10:59:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: 2s 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: ]] 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZGJjYWUyYmVjNzQ5OGU3MzUxYjBiYzhkZmYzYTNhYTE1NTUxM2YzY2E0NmI3NDZk7Re5wg==: 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:00.424 10:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 10:59:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.954 10:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.521 nvme0n1 00:18:03.521 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.521 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.521 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.521 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.521 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.521 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:04.088 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:04.088 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:04.088 10:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:04.346 10:59:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:04.346 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:04.604 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:05.183 request: 00:18:05.183 { 00:18:05.183 "name": "nvme0", 00:18:05.183 "dhchap_key": "key1", 00:18:05.183 "dhchap_ctrlr_key": "key3", 00:18:05.183 "method": "bdev_nvme_set_keys", 00:18:05.183 "req_id": 1 00:18:05.183 } 00:18:05.183 Got JSON-RPC error response 00:18:05.183 response: 00:18:05.183 { 00:18:05.183 "code": -13, 00:18:05.183 "message": "Permission denied" 00:18:05.183 } 00:18:05.183 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:05.183 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.183 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.183 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.183 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:18:05.183 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:05.183 10:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.441 10:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:05.441 10:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:06.376 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:06.376 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:06.376 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.635 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:06.636 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:06.636 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.636 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.636 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.636 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.636 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.636 10:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:07.200 nvme0n1 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:07.200 
10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.200 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:07.201 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:07.767 request: 00:18:07.767 { 00:18:07.767 "name": "nvme0", 00:18:07.767 "dhchap_key": "key2", 00:18:07.767 "dhchap_ctrlr_key": "key0", 00:18:07.767 "method": "bdev_nvme_set_keys", 00:18:07.767 "req_id": 1 00:18:07.767 } 00:18:07.767 Got JSON-RPC error response 00:18:07.767 response: 00:18:07.767 { 00:18:07.767 "code": -13, 00:18:07.767 "message": "Permission denied" 00:18:07.767 } 00:18:07.767 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.767 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.767 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.767 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.767 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:07.767 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:07.767 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.026 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:08.026 10:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:08.961 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:08.961 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:08.961 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:09.220 10:59:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1430186 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1430186 ']' 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1430186 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1430186 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1430186' 00:18:09.220 killing process with pid 1430186 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1430186 00:18:09.220 10:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1430186 00:18:09.478 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:09.478 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.478 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:09.478 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:09.478 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:09.478 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:09.479 rmmod nvme_rdma 00:18:09.479 rmmod nvme_fabrics 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1457518 ']' 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1457518 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1457518 ']' 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1457518 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1457518 00:18:09.479 10:59:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1457518' 00:18:09.479 killing process with pid 1457518 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1457518 00:18:09.479 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1457518 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fzD /tmp/spdk.key-sha256.2Oo /tmp/spdk.key-sha384.yyp /tmp/spdk.key-sha512.kEk /tmp/spdk.key-sha512.Uj3 /tmp/spdk.key-sha384.Uza /tmp/spdk.key-sha256.Cf4 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:18:09.738 00:18:09.738 real 3m2.746s 00:18:09.738 user 6m58.737s 00:18:09.738 sys 0m21.230s 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.738 ************************************ 00:18:09.738 END TEST nvmf_auth_target 00:18:09.738 ************************************ 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:09.738 10:59:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.998 ************************************ 00:18:09.998 START TEST nvmf_srq_overwhelm 00:18:09.998 ************************************ 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:09.998 * Looking for test storage... 
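The two killprocess calls above (pid 1430186, the host-side reactor_1 process, and pid 1457518, the nvmf target) follow the same helper pattern from common/autotest_common.sh. A condensed sketch reconstructed from those xtrace lines; the sudo branch and error handling are simplified here, so treat it as a sketch rather than the full helper:

    # killprocess <pid>: verify the pid, refuse to kill sudo itself,
    # then kill and reap it (simplified from the trace above).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1        # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 1       # process must still be alive
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # real helper handles sudo's child instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap so sockets and hugepages are released
    }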
00:18:09.998 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lcov --version 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:18:09.998 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.999 --rc genhtml_branch_coverage=1 00:18:09.999 --rc genhtml_function_coverage=1 00:18:09.999 --rc genhtml_legend=1 00:18:09.999 --rc geninfo_all_blocks=1 00:18:09.999 --rc geninfo_unexecuted_blocks=1 00:18:09.999 00:18:09.999 ' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.999 --rc genhtml_branch_coverage=1 00:18:09.999 --rc genhtml_function_coverage=1 00:18:09.999 --rc genhtml_legend=1 00:18:09.999 --rc geninfo_all_blocks=1 00:18:09.999 --rc geninfo_unexecuted_blocks=1 00:18:09.999 00:18:09.999 ' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.999 --rc genhtml_branch_coverage=1 00:18:09.999 --rc genhtml_function_coverage=1 00:18:09.999 --rc genhtml_legend=1 00:18:09.999 --rc geninfo_all_blocks=1 00:18:09.999 --rc geninfo_unexecuted_blocks=1 00:18:09.999 00:18:09.999 ' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.999 --rc genhtml_branch_coverage=1 00:18:09.999 --rc genhtml_function_coverage=1 00:18:09.999 --rc genhtml_legend=1 00:18:09.999 --rc geninfo_all_blocks=1 00:18:09.999 --rc geninfo_unexecuted_blocks=1 00:18:09.999 00:18:09.999 ' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
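The NVME_HOSTNQN / NVME_HOSTID pair assigned in the nvmf/common.sh lines just above (uuid 80bdebd3-4c74-ea11-906e-0017a4403562) is what every nvme connect later in this test passes as --hostnqn/--hostid. A minimal sketch of that setup; the first and third lines match the trace, while the uuid-suffix extraction is an assumed equivalent of what common.sh actually does:

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # assumption: keep only the uuid suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")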
00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.999 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:09.999 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.000 10:59:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:18:15.267 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 
'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:18:15.267 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:18:15.267 Found net devices under 0000:af:00.0: mlx_0_0 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:18:15.267 Found net devices under 0000:af:00.1: mlx_0_1 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:15.267 11:00:03 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:18:15.267 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- 
# for nic_name in $(get_rdma_if_list) 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:15.268 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:15.268 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:18:15.268 altname enp175s0f0np0 00:18:15.268 altname ens801f0np0 00:18:15.268 inet 192.168.100.8/24 scope global mlx_0_0 00:18:15.268 valid_lft forever preferred_lft forever 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:15.268 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:15.268 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:18:15.268 altname enp175s0f1np1 00:18:15.268 altname ens801f1np1 00:18:15.268 inet 192.168.100.9/24 scope global mlx_0_1 00:18:15.268 valid_lft forever preferred_lft forever 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:15.268 11:00:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:15.268 192.168.100.9' 00:18:15.268 11:00:04 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:15.268 192.168.100.9' 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:15.268 192.168.100.9' 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.268 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=1464250 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 1464250 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # '[' -z 1464250 ']' 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.269 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.269 [2024-11-15 11:00:04.112440] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:18:15.269 [2024-11-15 11:00:04.112493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.526 [2024-11-15 11:00:04.177813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.526 [2024-11-15 11:00:04.220007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.527 [2024-11-15 11:00:04.220049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.527 [2024-11-15 11:00:04.220056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.527 [2024-11-15 11:00:04.220062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.527 [2024-11-15 11:00:04.220067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.527 [2024-11-15 11:00:04.221699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.527 [2024-11-15 11:00:04.221798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.527 [2024-11-15 11:00:04.221883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.527 [2024-11-15 11:00:04.221885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@866 -- # return 0 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.527 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.527 [2024-11-15 11:00:04.392433] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1596230/0x159a720) succeed. 00:18:15.527 [2024-11-15 11:00:04.401805] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15978c0/0x15dbdc0) succeed. 
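With the target up (nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the RDMA transport created against both mlx5 devices (the two create_ib_device notices above), the script provisions six subsystems and connects the host to each; that cycle unrolls through the rest of this section. Condensed into the equivalent shell, with arguments exactly as echoed by xtrace ($rpc is shorthand here for the rpc.py path shown in the log):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
    for i in $(seq 0 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512 B blocks
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
        # -i 15: common.sh lowered the queue count from 16 to 15 for the 0x1017 NICs
        nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
            -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    done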
00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.785 Malloc0 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:15.785 [2024-11-15 11:00:04.501023] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.785 11:00:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- 
# lsblk -l -o NAME 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:19.065 Malloc1 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.065 11:00:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:22.351 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:18:22.351 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:18:22.351 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:22.351 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:18:22.351 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:22.351 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1244 -- # grep -q -w nvme1n1 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:22.352 Malloc2 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.352 11:00:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme2n1 00:18:25.633 11:00:13 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.633 11:00:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:25.633 Malloc3 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.633 11:00:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme3n1 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:18:28.913 
11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:28.913 Malloc4 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.913 11:00:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme4n1 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.442 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:31.700 Malloc5 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.700 11:00:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme5n1 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:18:35.145 11:00:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:18:35.145 
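All six iterations traced above repeat one create/export/connect sequence per namespace. A minimal sketch of that loop, reconstructed from the srq_overwhelm.sh xtrace output (rpc_cmd and waitforblk are test-harness helpers from common/autotest_common.sh; HOSTNQN and HOSTID stand in for the uuid:80bdebd3-4c74-ea11-906e-0017a4403562 host identity shown in the traces, and the serial-number zero padding is inferred from the visible SPDK00000000000001..05 values):

for i in $(seq 0 5); do
  # Target side: subsystem, 64 MiB / 512 B malloc bdev, namespace, RDMA listener.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  # Host side: connect over RDMA, then wait for the namespace to appear as a block device.
  nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=$HOSTID -t rdma \
    -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
  waitforblk nvme${i}n1
done

Per the autotest_common.sh@1237-@1248 traces, waitforblk simply re-runs lsblk -l -o NAME | grep -q -w <dev> until the device shows up, then returns 0.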
[global]
00:18:35.145 thread=1
00:18:35.145 invalidate=1
00:18:35.145 rw=read
00:18:35.145 time_based=1
00:18:35.145 runtime=10
00:18:35.145 ioengine=libaio
00:18:35.145 direct=1
00:18:35.145 bs=1048576
00:18:35.145 iodepth=128
00:18:35.145 norandommap=1
00:18:35.145 numjobs=13
00:18:35.145
00:18:35.145 [job0]
00:18:35.145 filename=/dev/nvme0n1
00:18:35.145 [job1]
00:18:35.145 filename=/dev/nvme1n1
00:18:35.145 [job2]
00:18:35.145 filename=/dev/nvme2n1
00:18:35.145 [job3]
00:18:35.145 filename=/dev/nvme3n1
00:18:35.145 [job4]
00:18:35.145 filename=/dev/nvme4n1
00:18:35.145 [job5]
00:18:35.145 filename=/dev/nvme5n1
00:18:35.145 Could not set queue depth (nvme0n1)
00:18:35.145 Could not set queue depth (nvme1n1)
00:18:35.145 Could not set queue depth (nvme2n1)
00:18:35.145 Could not set queue depth (nvme3n1)
00:18:35.145 Could not set queue depth (nvme4n1)
00:18:35.145 Could not set queue depth (nvme5n1)
00:18:35.145 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:35.145 ...
00:18:35.145 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:35.145 ...
00:18:35.145 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:35.145 ...
00:18:35.145 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:35.145 ...
00:18:35.145 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:35.145 ...
00:18:35.145 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:18:35.145 ...
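The scale of the workload follows directly from this job file: six jobs (one per connected namespace) times numjobs=13 gives the 78 threads fio reports below, each thread keeping up to 128 one-megabyte reads in flight. A quick shell-arithmetic check (the target-side SRQ depth itself is not printed in this excerpt, so the overwhelm factor is implied by the test name rather than shown):

# Values taken from the [global] section and job list above.
jobs=6        # [job0]..[job5]
numjobs=13    # fio-wrapper was invoked with -n 13
iodepth=128   # fio-wrapper was invoked with -d 128
echo "threads:     $((jobs * numjobs))"           # 78, matching "Starting 78 threads"
echo "outstanding: $((jobs * numjobs * iodepth))" # up to 9984 concurrent 1 MiB reads

Pushing nearly ten thousand outstanding operations at the RDMA target is what exercises the shared receive queue, per the srq_overwhelm test name.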
00:18:35.145 fio-3.35 00:18:35.145 Starting 78 threads 00:18:47.355 00:18:47.355 job0: (groupid=0, jobs=1): err= 0: pid=1468161: Fri Nov 15 11:00:34 2024 00:18:47.355 read: IOPS=15, BW=15.6MiB/s (16.4MB/s)(157MiB/10058msec) 00:18:47.355 slat (usec): min=44, max=4228.5k, avg=63698.85, stdev=375357.74 00:18:47.355 clat (msec): min=56, max=9722, avg=1670.97, stdev=1881.02 00:18:47.355 lat (msec): min=59, max=9754, avg=1734.67, stdev=1986.20 00:18:47.355 clat percentiles (msec): 00:18:47.355 | 1.00th=[ 60], 5.00th=[ 110], 10.00th=[ 146], 20.00th=[ 368], 00:18:47.355 | 30.00th=[ 642], 40.00th=[ 953], 50.00th=[ 1401], 60.00th=[ 1703], 00:18:47.355 | 70.00th=[ 1938], 80.00th=[ 2400], 90.00th=[ 2735], 95.00th=[ 5336], 00:18:47.355 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:18:47.355 | 99.99th=[ 9731] 00:18:47.355 bw ( KiB/s): min=60831, max=60831, per=1.65%, avg=60831.00, stdev= 0.00, samples=1 00:18:47.355 iops : min= 59, max= 59, avg=59.00, stdev= 0.00, samples=1 00:18:47.355 lat (msec) : 100=3.82%, 250=9.55%, 500=13.38%, 750=8.28%, 1000=6.37% 00:18:47.355 lat (msec) : 2000=29.30%, >=2000=29.30% 00:18:47.355 cpu : usr=0.01%, sys=0.77%, ctx=565, majf=0, minf=32769 00:18:47.355 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.2%, 32=20.4%, >=64=59.9% 00:18:47.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.355 complete : 0=0.0%, 4=96.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.2% 00:18:47.355 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.355 job0: (groupid=0, jobs=1): err= 0: pid=1468162: Fri Nov 15 11:00:34 2024 00:18:47.355 read: IOPS=5, BW=5206KiB/s (5331kB/s)(55.0MiB/10819msec) 00:18:47.356 slat (usec): min=958, max=2122.6k, avg=196269.03, stdev=559490.28 00:18:47.356 clat (msec): min=24, max=10817, avg=5832.33, stdev=3950.96 00:18:47.356 lat (msec): min=1726, max=10818, avg=6028.60, stdev=3925.12 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 24], 5.00th=[ 1770], 10.00th=[ 1804], 20.00th=[ 1888], 00:18:47.356 | 30.00th=[ 2005], 40.00th=[ 3842], 50.00th=[ 4077], 60.00th=[ 6342], 00:18:47.356 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:18:47.356 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:47.356 | 99.99th=[10805] 00:18:47.356 lat (msec) : 50=1.82%, 2000=23.64%, >=2000=74.55% 00:18:47.356 cpu : usr=0.00%, sys=0.38%, ctx=195, majf=0, minf=14081 00:18:47.356 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:18:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.356 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:47.356 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.356 job0: (groupid=0, jobs=1): err= 0: pid=1468163: Fri Nov 15 11:00:34 2024 00:18:47.356 read: IOPS=35, BW=35.1MiB/s (36.8MB/s)(377MiB/10737msec) 00:18:47.356 slat (usec): min=43, max=2117.9k, avg=28411.56, stdev=192789.81 00:18:47.356 clat (msec): min=24, max=8555, avg=2719.20, stdev=2595.15 00:18:47.356 lat (msec): min=532, max=10597, avg=2747.61, stdev=2611.88 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 550], 5.00th=[ 584], 10.00th=[ 609], 20.00th=[ 667], 00:18:47.356 | 30.00th=[ 735], 40.00th=[ 776], 50.00th=[ 860], 60.00th=[ 1687], 00:18:47.356 | 70.00th=[ 5067], 80.00th=[ 6409], 90.00th=[ 
6611], 95.00th=[ 6678], 00:18:47.356 | 99.00th=[ 8423], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:18:47.356 | 99.99th=[ 8557] 00:18:47.356 bw ( KiB/s): min= 2048, max=192512, per=1.73%, avg=63744.00, stdev=80208.41, samples=8 00:18:47.356 iops : min= 2, max= 188, avg=62.25, stdev=78.33, samples=8 00:18:47.356 lat (msec) : 50=0.27%, 750=34.22%, 1000=20.69%, 2000=7.16%, >=2000=37.67% 00:18:47.356 cpu : usr=0.03%, sys=0.93%, ctx=520, majf=0, minf=32769 00:18:47.356 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.5%, >=64=83.3% 00:18:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.356 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:18:47.356 issued rwts: total=377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.356 job0: (groupid=0, jobs=1): err= 0: pid=1468164: Fri Nov 15 11:00:34 2024 00:18:47.356 read: IOPS=15, BW=16.0MiB/s (16.7MB/s)(173MiB/10838msec) 00:18:47.356 slat (usec): min=448, max=2127.9k, avg=57810.77, stdev=288297.62 00:18:47.356 clat (msec): min=835, max=9808, avg=4554.35, stdev=3501.53 00:18:47.356 lat (msec): min=837, max=10599, avg=4612.16, stdev=3521.88 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 835], 5.00th=[ 919], 10.00th=[ 1062], 20.00th=[ 1318], 00:18:47.356 | 30.00th=[ 1603], 40.00th=[ 1938], 50.00th=[ 3473], 60.00th=[ 3842], 00:18:47.356 | 70.00th=[ 8490], 80.00th=[ 9463], 90.00th=[ 9731], 95.00th=[ 9731], 00:18:47.356 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:18:47.356 | 99.99th=[ 9866] 00:18:47.356 bw ( KiB/s): min=22528, max=71680, per=1.27%, avg=47104.00, stdev=34755.71, samples=2 00:18:47.356 iops : min= 22, max= 70, avg=46.00, stdev=33.94, samples=2 00:18:47.356 lat (msec) : 1000=8.09%, 2000=33.53%, >=2000=58.38% 00:18:47.356 cpu : usr=0.02%, sys=0.91%, ctx=443, majf=0, minf=32769 00:18:47.356 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.5%, >=64=63.6% 00:18:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.356 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:18:47.356 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.356 job0: (groupid=0, jobs=1): err= 0: pid=1468165: Fri Nov 15 11:00:34 2024 00:18:47.356 read: IOPS=6, BW=6190KiB/s (6339kB/s)(65.0MiB/10752msec) 00:18:47.356 slat (usec): min=500, max=2104.6k, avg=164960.35, stdev=557395.35 00:18:47.356 clat (msec): min=29, max=10749, avg=6680.25, stdev=3344.64 00:18:47.356 lat (msec): min=2049, max=10751, avg=6845.21, stdev=3275.18 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 30], 5.00th=[ 2056], 10.00th=[ 2072], 20.00th=[ 2089], 00:18:47.356 | 30.00th=[ 4212], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490], 00:18:47.356 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:18:47.356 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:47.356 | 99.99th=[10805] 00:18:47.356 lat (msec) : 50=1.54%, >=2000=98.46% 00:18:47.356 cpu : usr=0.00%, sys=0.49%, ctx=55, majf=0, minf=16641 00:18:47.356 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:18:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.356 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:18:47.356 issued rwts: total=65,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.356 job0: (groupid=0, jobs=1): err= 0: pid=1468166: Fri Nov 15 11:00:34 2024 00:18:47.356 read: IOPS=164, BW=165MiB/s (173MB/s)(1658MiB/10067msec) 00:18:47.356 slat (usec): min=36, max=175635, avg=6028.19, stdev=12738.27 00:18:47.356 clat (msec): min=61, max=2752, avg=732.54, stdev=509.73 00:18:47.356 lat (msec): min=80, max=2754, avg=738.57, stdev=512.51 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 347], 5.00th=[ 376], 10.00th=[ 380], 20.00th=[ 384], 00:18:47.356 | 30.00th=[ 405], 40.00th=[ 514], 50.00th=[ 550], 60.00th=[ 634], 00:18:47.356 | 70.00th=[ 726], 80.00th=[ 852], 90.00th=[ 1351], 95.00th=[ 2089], 00:18:47.356 | 99.00th=[ 2567], 99.50th=[ 2635], 99.90th=[ 2769], 99.95th=[ 2769], 00:18:47.356 | 99.99th=[ 2769] 00:18:47.356 bw ( KiB/s): min=38912, max=339968, per=4.71%, avg=174193.78, stdev=107366.94, samples=18 00:18:47.356 iops : min= 38, max= 332, avg=170.11, stdev=104.85, samples=18 00:18:47.356 lat (msec) : 100=0.12%, 250=0.48%, 500=35.10%, 750=35.59%, 1000=11.70% 00:18:47.356 lat (msec) : 2000=11.76%, >=2000=5.25% 00:18:47.356 cpu : usr=0.09%, sys=2.65%, ctx=1725, majf=0, minf=32769 00:18:47.356 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:18:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.356 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.356 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.356 job0: (groupid=0, jobs=1): err= 0: pid=1468167: Fri Nov 15 11:00:34 2024 00:18:47.356 read: IOPS=8, BW=8352KiB/s (8553kB/s)(88.0MiB/10789msec) 00:18:47.356 slat (usec): min=366, max=2132.1k, avg=122375.69, stdev=455696.36 00:18:47.356 clat (msec): min=19, max=10782, avg=8476.82, stdev=1946.58 00:18:47.356 lat (msec): min=2046, max=10788, avg=8599.20, stdev=1735.90 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 20], 5.00th=[ 4212], 10.00th=[ 7886], 20.00th=[ 8087], 00:18:47.356 | 30.00th=[ 8154], 40.00th=[ 8288], 50.00th=[ 8356], 60.00th=[ 8490], 00:18:47.356 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:18:47.356 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:47.356 | 99.99th=[10805] 00:18:47.356 lat (msec) : 20=1.14%, >=2000=98.86% 00:18:47.356 cpu : usr=0.00%, sys=0.54%, ctx=142, majf=0, minf=22529 00:18:47.356 IO depths : 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4% 00:18:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.356 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:18:47.356 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.356 job0: (groupid=0, jobs=1): err= 0: pid=1468168: Fri Nov 15 11:00:34 2024 00:18:47.356 read: IOPS=3, BW=3529KiB/s (3614kB/s)(37.0MiB/10736msec) 00:18:47.356 slat (usec): min=797, max=2131.3k, avg=289490.05, stdev=701883.06 00:18:47.356 clat (msec): min=24, max=10728, avg=6302.73, stdev=3481.67 00:18:47.356 lat (msec): min=2072, max=10735, avg=6592.22, stdev=3389.19 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 24], 5.00th=[ 2072], 10.00th=[ 2089], 20.00th=[ 3977], 00:18:47.356 | 30.00th=[ 4077], 40.00th=[ 4077], 50.00th=[ 4212], 60.00th=[ 6409], 00:18:47.356 | 
70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:18:47.356 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:18:47.356 | 99.99th=[10671] 00:18:47.356 lat (msec) : 50=2.70%, >=2000=97.30% 00:18:47.356 cpu : usr=0.00%, sys=0.20%, ctx=92, majf=0, minf=9473 00:18:47.356 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:18:47.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.356 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:47.356 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.356 job0: (groupid=0, jobs=1): err= 0: pid=1468169: Fri Nov 15 11:00:34 2024 00:18:47.356 read: IOPS=30, BW=30.8MiB/s (32.2MB/s)(310MiB/10080msec) 00:18:47.356 slat (usec): min=35, max=4243.6k, avg=32497.28, stdev=269934.78 00:18:47.356 clat (msec): min=4, max=9029, avg=1300.36, stdev=1943.72 00:18:47.356 lat (msec): min=81, max=9029, avg=1332.86, stdev=1991.14 00:18:47.356 clat percentiles (msec): 00:18:47.356 | 1.00th=[ 87], 5.00th=[ 105], 10.00th=[ 201], 20.00th=[ 397], 00:18:47.356 | 30.00th=[ 592], 40.00th=[ 709], 50.00th=[ 760], 60.00th=[ 785], 00:18:47.356 | 70.00th=[ 919], 80.00th=[ 1435], 90.00th=[ 1989], 95.00th=[ 8926], 00:18:47.356 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:18:47.356 | 99.99th=[ 9060] 00:18:47.356 bw ( KiB/s): min=24576, max=186368, per=3.36%, avg=124245.33, stdev=87186.45, samples=3 00:18:47.356 iops : min= 24, max= 182, avg=121.33, stdev=85.14, samples=3 00:18:47.356 lat (msec) : 10=0.32%, 100=3.55%, 250=6.45%, 500=14.84%, 750=22.58% 00:18:47.356 lat (msec) : 1000=23.87%, 2000=18.39%, >=2000=10.00% 00:18:47.356 cpu : usr=0.05%, sys=0.84%, ctx=504, majf=0, minf=32769 00:18:47.357 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.3%, >=64=79.7% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:18:47.357 issued rwts: total=310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.357 job0: (groupid=0, jobs=1): err= 0: pid=1468170: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=4, BW=4144KiB/s (4244kB/s)(44.0MiB/10872msec) 00:18:47.357 slat (usec): min=1284, max=2137.0k, avg=246568.18, stdev=671244.40 00:18:47.357 clat (msec): min=22, max=10868, avg=9503.24, stdev=2797.48 00:18:47.357 lat (msec): min=2052, max=10871, avg=9749.81, stdev=2390.95 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 22], 5.00th=[ 2072], 10.00th=[ 4245], 20.00th=[ 8490], 00:18:47.357 | 30.00th=[10671], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805], 00:18:47.357 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:18:47.357 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:47.357 | 99.99th=[10805] 00:18:47.357 lat (msec) : 50=2.27%, >=2000=97.73% 00:18:47.357 cpu : usr=0.00%, sys=0.43%, ctx=109, majf=0, minf=11265 00:18:47.357 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:47.357 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:18:47.357 job0: (groupid=0, jobs=1): err= 0: pid=1468171: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=119, BW=120MiB/s (126MB/s)(1210MiB/10095msec) 00:18:47.357 slat (usec): min=32, max=2083.8k, avg=8271.72, stdev=82723.74 00:18:47.357 clat (msec): min=79, max=5024, avg=635.26, stdev=474.79 00:18:47.357 lat (msec): min=101, max=5061, avg=643.53, stdev=492.15 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 169], 5.00th=[ 397], 10.00th=[ 435], 20.00th=[ 468], 00:18:47.357 | 30.00th=[ 510], 40.00th=[ 531], 50.00th=[ 558], 60.00th=[ 592], 00:18:47.357 | 70.00th=[ 642], 80.00th=[ 693], 90.00th=[ 760], 95.00th=[ 810], 00:18:47.357 | 99.00th=[ 2869], 99.50th=[ 4933], 99.90th=[ 5000], 99.95th=[ 5000], 00:18:47.357 | 99.99th=[ 5000] 00:18:47.357 bw ( KiB/s): min=155091, max=317440, per=5.99%, avg=221296.30, stdev=53826.66, samples=10 00:18:47.357 iops : min= 151, max= 310, avg=216.00, stdev=52.68, samples=10 00:18:47.357 lat (msec) : 100=0.08%, 250=1.90%, 500=25.79%, 750=60.50%, 1000=9.42% 00:18:47.357 lat (msec) : >=2000=2.31% 00:18:47.357 cpu : usr=0.14%, sys=1.57%, ctx=1006, majf=0, minf=32769 00:18:47.357 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.357 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.357 job0: (groupid=0, jobs=1): err= 0: pid=1468172: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=8, BW=8207KiB/s (8403kB/s)(86.0MiB/10731msec) 00:18:47.357 slat (usec): min=553, max=2130.0k, avg=124539.66, stdev=459298.40 00:18:47.357 clat (msec): min=19, max=10634, avg=9431.08, stdev=2200.40 00:18:47.357 lat (msec): min=2059, max=10730, avg=9555.62, stdev=1950.34 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 20], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 9866], 00:18:47.357 | 30.00th=[ 9866], 40.00th=[10000], 50.00th=[10134], 60.00th=[10268], 00:18:47.357 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10537], 95.00th=[10671], 00:18:47.357 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:18:47.357 | 99.99th=[10671] 00:18:47.357 lat (msec) : 20=1.16%, >=2000=98.84% 00:18:47.357 cpu : usr=0.00%, sys=0.60%, ctx=186, majf=0, minf=22017 00:18:47.357 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:18:47.357 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.357 job0: (groupid=0, jobs=1): err= 0: pid=1468173: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=5, BW=5750KiB/s (5888kB/s)(61.0MiB/10863msec) 00:18:47.357 slat (usec): min=677, max=2131.3k, avg=177758.76, stdev=552791.81 00:18:47.357 clat (msec): min=19, max=10861, avg=8965.34, stdev=3274.89 00:18:47.357 lat (msec): min=2046, max=10862, avg=9143.10, stdev=3069.03 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 20], 5.00th=[ 2072], 10.00th=[ 3708], 20.00th=[ 4212], 00:18:47.357 | 30.00th=[10671], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805], 00:18:47.357 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:18:47.357 | 99.00th=[10805], 99.50th=[10805], 
99.90th=[10805], 99.95th=[10805], 00:18:47.357 | 99.99th=[10805] 00:18:47.357 lat (msec) : 20=1.64%, >=2000=98.36% 00:18:47.357 cpu : usr=0.01%, sys=0.54%, ctx=169, majf=0, minf=15617 00:18:47.357 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:47.357 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.357 job1: (groupid=0, jobs=1): err= 0: pid=1468174: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=51, BW=51.4MiB/s (53.9MB/s)(553MiB/10757msec) 00:18:47.357 slat (usec): min=38, max=2131.4k, avg=19384.87, stdev=146983.77 00:18:47.357 clat (msec): min=33, max=6871, avg=2271.77, stdev=2344.92 00:18:47.357 lat (msec): min=504, max=6873, avg=2291.16, stdev=2348.20 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 506], 5.00th=[ 506], 10.00th=[ 510], 20.00th=[ 510], 00:18:47.357 | 30.00th=[ 527], 40.00th=[ 676], 50.00th=[ 1401], 60.00th=[ 1703], 00:18:47.357 | 70.00th=[ 1921], 80.00th=[ 5134], 90.00th=[ 6611], 95.00th=[ 6745], 00:18:47.357 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:18:47.357 | 99.99th=[ 6879] 00:18:47.357 bw ( KiB/s): min= 4096, max=253952, per=2.94%, avg=108800.00, stdev=97603.22, samples=8 00:18:47.357 iops : min= 4, max= 248, avg=106.25, stdev=95.32, samples=8 00:18:47.357 lat (msec) : 50=0.18%, 750=41.95%, 1000=2.35%, 2000=29.84%, >=2000=25.68% 00:18:47.357 cpu : usr=0.02%, sys=1.11%, ctx=1220, majf=0, minf=32769 00:18:47.357 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:47.357 issued rwts: total=553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.357 job1: (groupid=0, jobs=1): err= 0: pid=1468175: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=21, BW=21.2MiB/s (22.2MB/s)(228MiB/10778msec) 00:18:47.357 slat (usec): min=59, max=2108.6k, avg=47126.82, stdev=268036.90 00:18:47.357 clat (msec): min=31, max=10148, avg=5775.39, stdev=3299.67 00:18:47.357 lat (msec): min=1092, max=10154, avg=5822.52, stdev=3288.33 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 1167], 5.00th=[ 1284], 10.00th=[ 1452], 20.00th=[ 2022], 00:18:47.357 | 30.00th=[ 2089], 40.00th=[ 4212], 50.00th=[ 6141], 60.00th=[ 8154], 00:18:47.357 | 70.00th=[ 8490], 80.00th=[ 9329], 90.00th=[ 9731], 95.00th=[10000], 00:18:47.357 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:18:47.357 | 99.99th=[10134] 00:18:47.357 bw ( KiB/s): min=26624, max=47104, per=0.92%, avg=34133.33, stdev=7626.34, samples=6 00:18:47.357 iops : min= 26, max= 46, avg=33.33, stdev= 7.45, samples=6 00:18:47.357 lat (msec) : 50=0.44%, 2000=17.11%, >=2000=82.46% 00:18:47.357 cpu : usr=0.00%, sys=1.01%, ctx=333, majf=0, minf=32769 00:18:47.357 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.4% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:18:47.357 issued rwts: total=228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:18:47.357 job1: (groupid=0, jobs=1): err= 0: pid=1468176: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=4, BW=4904KiB/s (5022kB/s)(52.0MiB/10857msec) 00:18:47.357 slat (usec): min=767, max=2118.9k, avg=208330.75, stdev=613032.33 00:18:47.357 clat (msec): min=23, max=10855, avg=9071.92, stdev=2969.10 00:18:47.357 lat (msec): min=2043, max=10856, avg=9280.25, stdev=2688.55 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 24], 5.00th=[ 2056], 10.00th=[ 4212], 20.00th=[ 6409], 00:18:47.357 | 30.00th=[ 8557], 40.00th=[10537], 50.00th=[10537], 60.00th=[10805], 00:18:47.357 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:18:47.357 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:18:47.357 | 99.99th=[10805] 00:18:47.357 lat (msec) : 50=1.92%, >=2000=98.08% 00:18:47.357 cpu : usr=0.00%, sys=0.42%, ctx=113, majf=0, minf=13313 00:18:47.357 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:18:47.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.357 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:18:47.357 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.357 job1: (groupid=0, jobs=1): err= 0: pid=1468177: Fri Nov 15 11:00:34 2024 00:18:47.357 read: IOPS=14, BW=14.4MiB/s (15.1MB/s)(155MiB/10789msec) 00:18:47.357 slat (usec): min=1661, max=2102.0k, avg=69377.87, stdev=312100.24 00:18:47.357 clat (msec): min=33, max=10274, avg=7683.97, stdev=2377.56 00:18:47.357 lat (msec): min=2077, max=10280, avg=7753.34, stdev=2303.21 00:18:47.357 clat percentiles (msec): 00:18:47.357 | 1.00th=[ 2072], 5.00th=[ 3842], 10.00th=[ 3977], 20.00th=[ 4245], 00:18:47.358 | 30.00th=[ 6544], 40.00th=[ 8221], 50.00th=[ 8792], 60.00th=[ 9060], 00:18:47.358 | 70.00th=[ 9463], 80.00th=[ 9731], 90.00th=[10134], 95.00th=[10268], 00:18:47.358 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:18:47.358 | 99.99th=[10268] 00:18:47.358 bw ( KiB/s): min= 2048, max=45056, per=0.50%, avg=18432.00, stdev=23260.81, samples=3 00:18:47.358 iops : min= 2, max= 44, avg=18.00, stdev=22.72, samples=3 00:18:47.358 lat (msec) : 50=0.65%, >=2000=99.35% 00:18:47.358 cpu : usr=0.01%, sys=0.96%, ctx=575, majf=0, minf=32769 00:18:47.358 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.3%, 32=20.6%, >=64=59.4% 00:18:47.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.358 complete : 0=0.0%, 4=96.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.4% 00:18:47.358 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.358 job1: (groupid=0, jobs=1): err= 0: pid=1468178: Fri Nov 15 11:00:34 2024 00:18:47.358 read: IOPS=17, BW=17.1MiB/s (17.9MB/s)(183MiB/10724msec) 00:18:47.358 slat (usec): min=47, max=2168.5k, avg=58464.34, stdev=304117.10 00:18:47.358 clat (msec): min=23, max=10084, avg=6954.31, stdev=3550.04 00:18:47.358 lat (msec): min=1360, max=10084, avg=7012.78, stdev=3515.03 00:18:47.358 clat percentiles (msec): 00:18:47.358 | 1.00th=[ 1334], 5.00th=[ 1452], 10.00th=[ 1519], 20.00th=[ 1586], 00:18:47.358 | 30.00th=[ 4212], 40.00th=[ 8658], 50.00th=[ 8926], 60.00th=[ 9329], 00:18:47.358 | 70.00th=[ 9597], 80.00th=[ 9731], 90.00th=[ 9866], 95.00th=[10000], 00:18:47.358 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 
00:18:47.358 | 99.99th=[10134] 00:18:47.358 bw ( KiB/s): min= 2048, max=61440, per=0.44%, avg=16091.43, stdev=21198.81, samples=7 00:18:47.358 iops : min= 2, max= 60, avg=15.71, stdev=20.70, samples=7 00:18:47.358 lat (msec) : 50=0.55%, 2000=25.68%, >=2000=73.77% 00:18:47.358 cpu : usr=0.02%, sys=0.79%, ctx=466, majf=0, minf=32769 00:18:47.358 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.4%, 16=8.7%, 32=17.5%, >=64=65.6% 00:18:47.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.358 complete : 0=0.0%, 4=98.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.8% 00:18:47.358 issued rwts: total=183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.358 job1: (groupid=0, jobs=1): err= 0: pid=1468179: Fri Nov 15 11:00:34 2024 00:18:47.358 read: IOPS=33, BW=33.7MiB/s (35.4MB/s)(340MiB/10078msec) 00:18:47.358 slat (usec): min=37, max=2135.7k, avg=29424.99, stdev=177573.09 00:18:47.358 clat (msec): min=70, max=7880, avg=3275.64, stdev=2994.61 00:18:47.358 lat (msec): min=77, max=7900, avg=3305.07, stdev=3001.37 00:18:47.358 clat percentiles (msec): 00:18:47.358 | 1.00th=[ 83], 5.00th=[ 262], 10.00th=[ 363], 20.00th=[ 550], 00:18:47.358 | 30.00th=[ 743], 40.00th=[ 835], 50.00th=[ 2232], 60.00th=[ 2534], 00:18:47.358 | 70.00th=[ 6678], 80.00th=[ 6812], 90.00th=[ 7349], 95.00th=[ 7752], 00:18:47.358 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:18:47.358 | 99.99th=[ 7886] 00:18:47.358 bw ( KiB/s): min= 2048, max=165888, per=1.68%, avg=62233.71, stdev=68477.53, samples=7 00:18:47.358 iops : min= 2, max= 162, avg=60.71, stdev=66.79, samples=7 00:18:47.358 lat (msec) : 100=1.76%, 250=2.06%, 500=13.82%, 750=16.18%, 1000=12.94% 00:18:47.358 lat (msec) : 2000=0.59%, >=2000=52.65% 00:18:47.358 cpu : usr=0.05%, sys=0.97%, ctx=839, majf=0, minf=32769 00:18:47.358 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.5% 00:18:47.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.358 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:18:47.358 issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.358 job1: (groupid=0, jobs=1): err= 0: pid=1468180: Fri Nov 15 11:00:34 2024 00:18:47.358 read: IOPS=16, BW=16.0MiB/s (16.8MB/s)(174MiB/10850msec) 00:18:47.358 slat (usec): min=1444, max=2104.5k, avg=62154.40, stdev=299315.96 00:18:47.358 clat (msec): min=33, max=9781, avg=7071.13, stdev=3090.25 00:18:47.358 lat (msec): min=1764, max=9803, avg=7133.28, stdev=3043.34 00:18:47.358 clat percentiles (msec): 00:18:47.358 | 1.00th=[ 1770], 5.00th=[ 1804], 10.00th=[ 1905], 20.00th=[ 2022], 00:18:47.358 | 30.00th=[ 6342], 40.00th=[ 8658], 50.00th=[ 8792], 60.00th=[ 8926], 00:18:47.358 | 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9597], 95.00th=[ 9731], 00:18:47.358 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:18:47.358 | 99.99th=[ 9731] 00:18:47.358 bw ( KiB/s): min= 2048, max=49152, per=0.36%, avg=13458.29, stdev=16589.94, samples=7 00:18:47.358 iops : min= 2, max= 48, avg=13.14, stdev=16.20, samples=7 00:18:47.358 lat (msec) : 50=0.57%, 2000=16.09%, >=2000=83.33% 00:18:47.358 cpu : usr=0.00%, sys=0.85%, ctx=724, majf=0, minf=32769 00:18:47.358 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.4%, >=64=63.8% 00:18:47.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.358 complete 
: 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:18:47.358 issued rwts: total=174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.358 job1: (groupid=0, jobs=1): err= 0: pid=1468181: Fri Nov 15 11:00:34 2024 00:18:47.358 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(202MiB/10732msec) 00:18:47.358 slat (usec): min=434, max=2112.7k, avg=52956.51, stdev=279252.06 00:18:47.358 clat (msec): min=33, max=10721, avg=6292.05, stdev=3644.10 00:18:47.358 lat (msec): min=1349, max=10721, avg=6345.00, stdev=3620.33 00:18:47.358 clat percentiles (msec): 00:18:47.358 | 1.00th=[ 1351], 5.00th=[ 1435], 10.00th=[ 1452], 20.00th=[ 1502], 00:18:47.358 | 30.00th=[ 1670], 40.00th=[ 7752], 50.00th=[ 8658], 60.00th=[ 8926], 00:18:47.358 | 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9597], 95.00th=[ 9731], 00:18:47.358 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:18:47.358 | 99.99th=[10671] 00:18:47.358 bw ( KiB/s): min= 2048, max=90112, per=0.59%, avg=21650.29, stdev=31922.03, samples=7 00:18:47.358 iops : min= 2, max= 88, avg=21.14, stdev=31.17, samples=7 00:18:47.358 lat (msec) : 50=0.50%, 2000=32.67%, >=2000=66.83% 00:18:47.358 cpu : usr=0.00%, sys=0.93%, ctx=794, majf=0, minf=32769 00:18:47.358 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8% 00:18:47.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.358 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:18:47.358 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.358 job1: (groupid=0, jobs=1): err= 0: pid=1468182: Fri Nov 15 11:00:34 2024 00:18:47.358 read: IOPS=10, BW=10.8MiB/s (11.3MB/s)(116MiB/10763msec) 00:18:47.358 slat (usec): min=741, max=2123.4k, avg=92508.33, stdev=377654.29 00:18:47.358 clat (msec): min=31, max=10732, avg=9213.55, stdev=1833.38 00:18:47.358 lat (msec): min=2050, max=10762, avg=9306.06, stdev=1624.92 00:18:47.358 clat percentiles (msec): 00:18:47.358 | 1.00th=[ 2056], 5.00th=[ 4245], 10.00th=[ 8490], 20.00th=[ 8792], 00:18:47.358 | 30.00th=[ 9060], 40.00th=[ 9329], 50.00th=[ 9463], 60.00th=[ 9866], 00:18:47.358 | 70.00th=[10134], 80.00th=[10402], 90.00th=[10671], 95.00th=[10671], 00:18:47.358 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:18:47.358 | 99.99th=[10671] 00:18:47.358 lat (msec) : 50=0.86%, >=2000=99.14% 00:18:47.358 cpu : usr=0.01%, sys=0.85%, ctx=423, majf=0, minf=29697 00:18:47.358 IO depths : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.9%, 16=13.8%, 32=27.6%, >=64=45.7% 00:18:47.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.358 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:18:47.358 issued rwts: total=116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.358 job1: (groupid=0, jobs=1): err= 0: pid=1468183: Fri Nov 15 11:00:34 2024 00:18:47.358 read: IOPS=14, BW=14.8MiB/s (15.6MB/s)(160MiB/10776msec) 00:18:47.358 slat (msec): min=2, max=2101, avg=67.34, stdev=312.88 00:18:47.358 clat (usec): min=815, max=9724.7k, avg=7499586.31, stdev=2841363.31 00:18:47.358 lat (msec): min=1733, max=9736, avg=7566.92, stdev=2774.69 00:18:47.358 clat percentiles (msec): 00:18:47.358 | 1.00th=[ 1703], 5.00th=[ 1770], 10.00th=[ 1905], 20.00th=[ 3775], 00:18:47.358 | 30.00th=[ 8020], 40.00th=[ 8658], 50.00th=[ 8926], 
60.00th=[ 9060], 00:18:47.358 | 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9463], 95.00th=[ 9597], 00:18:47.358 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:18:47.358 | 99.99th=[ 9731] 00:18:47.358 bw ( KiB/s): min= 2048, max=26624, per=0.35%, avg=13107.20, stdev=10798.22, samples=5 00:18:47.358 iops : min= 2, max= 26, avg=12.80, stdev=10.55, samples=5 00:18:47.358 lat (usec) : 1000=0.62% 00:18:47.358 lat (msec) : 2000=11.88%, >=2000=87.50% 00:18:47.358 cpu : usr=0.01%, sys=0.71%, ctx=734, majf=0, minf=32769 00:18:47.358 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=10.0%, 32=20.0%, >=64=60.6% 00:18:47.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.358 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9% 00:18:47.358 issued rwts: total=160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.358 job1: (groupid=0, jobs=1): err= 0: pid=1468184: Fri Nov 15 11:00:34 2024 00:18:47.358 read: IOPS=14, BW=14.4MiB/s (15.1MB/s)(155MiB/10788msec) 00:18:47.358 slat (usec): min=878, max=2108.1k, avg=69583.95, stdev=307163.40 00:18:47.358 clat (usec): min=955, max=10449k, avg=7907276.67, stdev=2678682.92 00:18:47.358 lat (msec): min=1814, max=10458, avg=7976.86, stdev=2608.37 00:18:47.358 clat percentiles (msec): 00:18:47.358 | 1.00th=[ 1821], 5.00th=[ 2056], 10.00th=[ 2433], 20.00th=[ 6342], 00:18:47.358 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[ 8792], 60.00th=[ 8926], 00:18:47.358 | 70.00th=[ 9463], 80.00th=[ 9731], 90.00th=[10000], 95.00th=[10268], 00:18:47.359 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:18:47.359 | 99.99th=[10402] 00:18:47.359 bw ( KiB/s): min= 2043, max=16384, per=0.25%, avg=9209.67, stdev=7177.04, samples=6 00:18:47.359 iops : min= 1, max= 16, avg= 8.67, stdev= 7.03, samples=6 00:18:47.359 lat (usec) : 1000=0.65% 00:18:47.359 lat (msec) : 2000=3.23%, >=2000=96.13% 00:18:47.359 cpu : usr=0.00%, sys=0.94%, ctx=652, majf=0, minf=32769 00:18:47.359 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.3%, 32=20.6%, >=64=59.4% 00:18:47.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.359 complete : 0=0.0%, 4=96.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.4% 00:18:47.359 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.359 job1: (groupid=0, jobs=1): err= 0: pid=1468185: Fri Nov 15 11:00:34 2024 00:18:47.359 read: IOPS=28, BW=28.9MiB/s (30.3MB/s)(312MiB/10793msec) 00:18:47.359 slat (usec): min=39, max=2110.1k, avg=34474.65, stdev=217892.26 00:18:47.359 clat (msec): min=35, max=8962, avg=4100.95, stdev=3405.81 00:18:47.359 lat (msec): min=890, max=8998, avg=4135.42, stdev=3405.78 00:18:47.359 clat percentiles (msec): 00:18:47.359 | 1.00th=[ 885], 5.00th=[ 944], 10.00th=[ 961], 20.00th=[ 1045], 00:18:47.359 | 30.00th=[ 1150], 40.00th=[ 1385], 50.00th=[ 1955], 60.00th=[ 6342], 00:18:47.359 | 70.00th=[ 7148], 80.00th=[ 8557], 90.00th=[ 8792], 95.00th=[ 8792], 00:18:47.359 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:18:47.359 | 99.99th=[ 8926] 00:18:47.359 bw ( KiB/s): min= 4096, max=137216, per=1.46%, avg=53833.14, stdev=49473.99, samples=7 00:18:47.359 iops : min= 4, max= 134, avg=52.57, stdev=48.31, samples=7 00:18:47.359 lat (msec) : 50=0.32%, 1000=11.86%, 2000=37.82%, >=2000=50.00% 00:18:47.359 cpu : usr=0.01%, sys=0.97%, ctx=482, majf=0, minf=32769 
00:18:47.359 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.3%, >=64=79.8% 00:18:47.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.359 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:18:47.359 issued rwts: total=312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.359 job1: (groupid=0, jobs=1): err= 0: pid=1468186: Fri Nov 15 11:00:34 2024 00:18:47.359 read: IOPS=136, BW=137MiB/s (143MB/s)(1373MiB/10046msec) 00:18:47.359 slat (usec): min=33, max=2080.6k, avg=7288.28, stdev=57430.70 00:18:47.359 clat (msec): min=31, max=3391, avg=843.28, stdev=758.42 00:18:47.359 lat (msec): min=59, max=3398, avg=850.57, stdev=761.63 00:18:47.359 clat percentiles (msec): 00:18:47.359 | 1.00th=[ 176], 5.00th=[ 380], 10.00th=[ 380], 20.00th=[ 384], 00:18:47.359 | 30.00th=[ 388], 40.00th=[ 397], 50.00th=[ 625], 60.00th=[ 726], 00:18:47.359 | 70.00th=[ 844], 80.00th=[ 1036], 90.00th=[ 1267], 95.00th=[ 3071], 00:18:47.359 | 99.00th=[ 3339], 99.50th=[ 3339], 99.90th=[ 3339], 99.95th=[ 3406], 00:18:47.359 | 99.99th=[ 3406] 00:18:47.359 bw ( KiB/s): min=10240, max=346112, per=4.66%, avg=172032.00, stdev=111324.75, samples=14 00:18:47.359 iops : min= 10, max= 338, avg=168.00, stdev=108.72, samples=14 00:18:47.359 lat (msec) : 50=0.07%, 100=0.51%, 250=0.95%, 500=43.92%, 750=15.37% 00:18:47.359 lat (msec) : 1000=17.41%, 2000=12.53%, >=2000=9.25% 00:18:47.359 cpu : usr=0.11%, sys=1.98%, ctx=1347, majf=0, minf=32769 00:18:47.359 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:18:47.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.359 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.359 issued rwts: total=1373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.359 job2: (groupid=0, jobs=1): err= 0: pid=1468187: Fri Nov 15 11:00:34 2024 00:18:47.359 read: IOPS=27, BW=27.5MiB/s (28.8MB/s)(277MiB/10079msec) 00:18:47.359 slat (usec): min=31, max=2094.5k, avg=36105.98, stdev=211119.55 00:18:47.359 clat (msec): min=75, max=8081, avg=1819.69, stdev=1741.39 00:18:47.359 lat (msec): min=82, max=8095, avg=1855.79, stdev=1779.62 00:18:47.359 clat percentiles (msec): 00:18:47.359 | 1.00th=[ 88], 5.00th=[ 300], 10.00th=[ 651], 20.00th=[ 1167], 00:18:47.359 | 30.00th=[ 1267], 40.00th=[ 1368], 50.00th=[ 1452], 60.00th=[ 1586], 00:18:47.359 | 70.00th=[ 1653], 80.00th=[ 1720], 90.00th=[ 1804], 95.00th=[ 8020], 00:18:47.359 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:18:47.359 | 99.99th=[ 8087] 00:18:47.359 bw ( KiB/s): min=44876, max=157696, per=2.08%, avg=76755.00, stdev=54078.19, samples=4 00:18:47.359 iops : min= 43, max= 154, avg=74.75, stdev=52.97, samples=4 00:18:47.359 lat (msec) : 100=1.44%, 250=2.53%, 500=3.25%, 750=5.42%, 1000=3.25% 00:18:47.359 lat (msec) : 2000=74.73%, >=2000=9.39% 00:18:47.359 cpu : usr=0.05%, sys=0.99%, ctx=572, majf=0, minf=32769 00:18:47.359 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.3% 00:18:47.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.359 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:18:47.359 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.359 job2: (groupid=0, jobs=1): err= 0: 
pid=1468188: Fri Nov 15 11:00:34 2024 00:18:47.359 read: IOPS=29, BW=29.3MiB/s (30.7MB/s)(296MiB/10110msec) 00:18:47.359 slat (usec): min=92, max=2092.4k, avg=33784.53, stdev=207069.06 00:18:47.359 clat (msec): min=107, max=8420, avg=4112.09, stdev=3240.62 00:18:47.359 lat (msec): min=206, max=8428, avg=4145.87, stdev=3240.64 00:18:47.359 clat percentiles (msec): 00:18:47.359 | 1.00th=[ 215], 5.00th=[ 531], 10.00th=[ 953], 20.00th=[ 1351], 00:18:47.359 | 30.00th=[ 1368], 40.00th=[ 1385], 50.00th=[ 1469], 60.00th=[ 5604], 00:18:47.359 | 70.00th=[ 7819], 80.00th=[ 8020], 90.00th=[ 8221], 95.00th=[ 8288], 00:18:47.359 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:18:47.359 | 99.99th=[ 8423] 00:18:47.359 bw ( KiB/s): min= 6144, max=92160, per=1.04%, avg=38456.89, stdev=32410.49, samples=9 00:18:47.359 iops : min= 6, max= 90, avg=37.56, stdev=31.65, samples=9 00:18:47.359 lat (msec) : 250=1.69%, 500=2.36%, 750=3.38%, 1000=3.38%, 2000=41.55% 00:18:47.359 lat (msec) : >=2000=47.64% 00:18:47.359 cpu : usr=0.00%, sys=1.36%, ctx=665, majf=0, minf=32769 00:18:47.359 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.7% 00:18:47.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.359 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:18:47.359 issued rwts: total=296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.359 job2: (groupid=0, jobs=1): err= 0: pid=1468189: Fri Nov 15 11:00:34 2024 00:18:47.359 read: IOPS=61, BW=61.7MiB/s (64.7MB/s)(621MiB/10070msec) 00:18:47.359 slat (usec): min=38, max=2068.7k, avg=16152.64, stdev=138353.78 00:18:47.359 clat (msec): min=34, max=4766, avg=1238.70, stdev=1228.19 00:18:47.359 lat (msec): min=73, max=4804, avg=1254.85, stdev=1239.07 00:18:47.359 clat percentiles (msec): 00:18:47.359 | 1.00th=[ 85], 5.00th=[ 506], 10.00th=[ 510], 20.00th=[ 510], 00:18:47.359 | 30.00th=[ 514], 40.00th=[ 518], 50.00th=[ 523], 60.00th=[ 542], 00:18:47.359 | 70.00th=[ 953], 80.00th=[ 2769], 90.00th=[ 3608], 95.00th=[ 3675], 00:18:47.359 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4799], 99.95th=[ 4799], 00:18:47.359 | 99.99th=[ 4799] 00:18:47.359 bw ( KiB/s): min= 2048, max=256000, per=3.41%, avg=126005.88, stdev=102059.57, samples=8 00:18:47.359 iops : min= 2, max= 250, avg=123.00, stdev=99.71, samples=8 00:18:47.359 lat (msec) : 50=0.16%, 100=1.29%, 250=0.81%, 500=1.77%, 750=63.12% 00:18:47.359 lat (msec) : 1000=3.70%, 2000=4.99%, >=2000=24.15% 00:18:47.359 cpu : usr=0.11%, sys=1.29%, ctx=687, majf=0, minf=32769 00:18:47.359 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.9% 00:18:47.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.359 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:47.359 issued rwts: total=621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.359 job2: (groupid=0, jobs=1): err= 0: pid=1468190: Fri Nov 15 11:00:34 2024 00:18:47.359 read: IOPS=13, BW=13.1MiB/s (13.8MB/s)(133MiB/10123msec) 00:18:47.359 slat (usec): min=736, max=2124.5k, avg=75515.36, stdev=354215.60 00:18:47.359 clat (msec): min=78, max=10106, avg=4511.96, stdev=4224.24 00:18:47.359 lat (msec): min=136, max=10110, avg=4587.48, stdev=4234.15 00:18:47.359 clat percentiles (msec): 00:18:47.359 | 1.00th=[ 138], 5.00th=[ 224], 10.00th=[ 330], 20.00th=[ 651], 00:18:47.359 | 
30.00th=[ 827], 40.00th=[ 1070], 50.00th=[ 1351], 60.00th=[ 5671],
00:18:47.359 | 70.00th=[ 7819], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134],
00:18:47.359 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:18:47.359 | 99.99th=[10134]
00:18:47.359 bw ( KiB/s): min=10240, max=10240, per=0.28%, avg=10240.00, stdev= 0.00, samples=1
00:18:47.359 iops : min= 10, max= 10, avg=10.00, stdev= 0.00, samples=1
00:18:47.359 lat (msec) : 100=0.75%, 250=4.51%, 500=9.77%, 750=11.28%, 1000=12.03%
00:18:47.359 lat (msec) : 2000=15.04%, >=2000=46.62%
00:18:47.359 cpu : usr=0.00%, sys=0.91%, ctx=331, majf=0, minf=32769
00:18:47.359 IO depths : 1=0.8%, 2=1.5%, 4=3.0%, 8=6.0%, 16=12.0%, 32=24.1%, >=64=52.6%
00:18:47.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.359 complete : 0=0.0%, 4=85.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=14.3%
00:18:47.359 issued rwts: total=133,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.359 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.359 job2: (groupid=0, jobs=1): err= 0: pid=1468191: Fri Nov 15 11:00:34 2024
00:18:47.359 read: IOPS=42, BW=42.2MiB/s (44.3MB/s)(426MiB/10092msec)
00:18:47.359 slat (usec): min=37, max=1679.7k, avg=23480.68, stdev=102085.55
00:18:47.359 clat (msec): min=86, max=5940, avg=2757.00, stdev=1668.78
00:18:47.359 lat (msec): min=110, max=5969, avg=2780.48, stdev=1670.02
00:18:47.359 clat percentiles (msec):
00:18:47.359 | 1.00th=[ 129], 5.00th=[ 693], 10.00th=[ 1318], 20.00th=[ 1401],
00:18:47.359 | 30.00th=[ 1536], 40.00th=[ 1821], 50.00th=[ 1989], 60.00th=[ 2366],
00:18:47.359 | 70.00th=[ 3540], 80.00th=[ 5000], 90.00th=[ 5537], 95.00th=[ 5805],
00:18:47.359 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940],
00:18:47.359 | 99.99th=[ 5940]
00:18:47.359 bw ( KiB/s): min=24576, max=75776, per=1.38%, avg=50924.25, stdev=19221.80, samples=12
00:18:47.359 iops : min= 24, max= 74, avg=49.58, stdev=18.78, samples=12
00:18:47.359 lat (msec) : 100=0.23%, 250=1.64%, 500=2.11%, 750=1.17%, 1000=1.64%
00:18:47.359 lat (msec) : 2000=44.60%, >=2000=48.59%
00:18:47.360 cpu : usr=0.00%, sys=1.08%, ctx=1171, majf=0, minf=32769
00:18:47.360 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.2%
00:18:47.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.360 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:47.360 issued rwts: total=426,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.360 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.360 job2: (groupid=0, jobs=1): err= 0: pid=1468192: Fri Nov 15 11:00:34 2024
00:18:47.360 read: IOPS=49, BW=49.8MiB/s (52.2MB/s)(503MiB/10107msec)
00:18:47.360 slat (usec): min=34, max=2023.0k, avg=19928.33, stdev=91713.19
00:18:47.360 clat (msec): min=79, max=4113, avg=2314.00, stdev=958.62
00:18:47.360 lat (msec): min=120, max=4115, avg=2333.93, stdev=954.33
00:18:47.360 clat percentiles (msec):
00:18:47.360 | 1.00th=[ 140], 5.00th=[ 818], 10.00th=[ 1183], 20.00th=[ 1603],
00:18:47.360 | 30.00th=[ 1955], 40.00th=[ 2039], 50.00th=[ 2165], 60.00th=[ 2265],
00:18:47.360 | 70.00th=[ 2366], 80.00th=[ 3574], 90.00th=[ 3775], 95.00th=[ 3943],
00:18:47.360 | 99.00th=[ 4044], 99.50th=[ 4044], 99.90th=[ 4111], 99.95th=[ 4111],
00:18:47.360 | 99.99th=[ 4111]
00:18:47.360 bw ( KiB/s): min=14336, max=141312, per=1.48%, avg=54857.14, stdev=32363.57, samples=14
00:18:47.360 iops : min= 14, max= 138, avg=53.57, stdev=31.61, samples=14
00:18:47.360 lat (msec) : 100=0.20%, 250=1.19%, 500=1.39%, 750=2.19%, 1000=0.99%
00:18:47.360 lat (msec) : 2000=29.62%, >=2000=64.41%
00:18:47.360 cpu : usr=0.01%, sys=1.37%, ctx=1361, majf=0, minf=32769
00:18:47.360 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5%
00:18:47.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.360 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:47.360 issued rwts: total=503,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.360 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.360 job2: (groupid=0, jobs=1): err= 0: pid=1468193: Fri Nov 15 11:00:34 2024
00:18:47.360 read: IOPS=43, BW=43.4MiB/s (45.5MB/s)(437MiB/10070msec)
00:18:47.360 slat (usec): min=30, max=2106.2k, avg=22882.87, stdev=173447.63
00:18:47.360 clat (msec): min=67, max=8057, avg=1188.34, stdev=1500.36
00:18:47.360 lat (msec): min=70, max=8121, avg=1211.22, stdev=1535.90
00:18:47.360 clat percentiles (msec):
00:18:47.360 | 1.00th=[ 78], 5.00th=[ 186], 10.00th=[ 292], 20.00th=[ 584],
00:18:47.360 | 30.00th=[ 726], 40.00th=[ 785], 50.00th=[ 802], 60.00th=[ 810],
00:18:47.360 | 70.00th=[ 936], 80.00th=[ 1250], 90.00th=[ 1754], 95.00th=[ 3842],
00:18:47.360 | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8087], 99.95th=[ 8087],
00:18:47.360 | 99.99th=[ 8087]
00:18:47.360 bw ( KiB/s): min=14336, max=192512, per=3.41%, avg=126176.60, stdev=68634.72, samples=5
00:18:47.360 iops : min= 14, max= 188, avg=123.20, stdev=67.02, samples=5
00:18:47.360 lat (msec) : 100=3.43%, 250=3.43%, 500=10.76%, 750=15.10%, 1000=40.05%
00:18:47.360 lat (msec) : 2000=21.05%, >=2000=6.18%
00:18:47.360 cpu : usr=0.01%, sys=1.25%, ctx=452, majf=0, minf=32769
00:18:47.360 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.3%, >=64=85.6%
00:18:47.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.360 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:47.360 issued rwts: total=437,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.360 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.360 job2: (groupid=0, jobs=1): err= 0: pid=1468194: Fri Nov 15 11:00:34 2024
00:18:47.360 read: IOPS=103, BW=104MiB/s (109MB/s)(1049MiB/10130msec)
00:18:47.360 slat (usec): min=31, max=721979, avg=9567.44, stdev=26645.51
00:18:47.360 clat (msec): min=87, max=3858, avg=1164.76, stdev=699.15
00:18:47.360 lat (msec): min=150, max=3858, avg=1174.33, stdev=701.10
00:18:47.360 clat percentiles (msec):
00:18:47.360 | 1.00th=[ 380], 5.00th=[ 401], 10.00th=[ 456], 20.00th=[ 542],
00:18:47.360 | 30.00th=[ 584], 40.00th=[ 785], 50.00th=[ 978], 60.00th=[ 1150],
00:18:47.360 | 70.00th=[ 1485], 80.00th=[ 1888], 90.00th=[ 2232], 95.00th=[ 2265],
00:18:47.360 | 99.00th=[ 3742], 99.50th=[ 3809], 99.90th=[ 3842], 99.95th=[ 3842],
00:18:47.360 | 99.99th=[ 3842]
00:18:47.360 bw ( KiB/s): min= 6144, max=274432, per=2.69%, avg=99274.11, stdev=80982.71, samples=19
00:18:47.360 iops : min= 6, max= 268, avg=96.95, stdev=79.08, samples=19
00:18:47.360 lat (msec) : 100=0.10%, 250=0.48%, 500=11.06%, 750=27.74%, 1000=12.20%
00:18:47.360 lat (msec) : 2000=31.08%, >=2000=17.35%
00:18:47.360 cpu : usr=0.03%, sys=1.88%, ctx=1702, majf=0, minf=32769
00:18:47.360 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0%
00:18:47.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.360 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.360 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.360 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.360 job2: (groupid=0, jobs=1): err= 0: pid=1468195: Fri Nov 15 11:00:34 2024
00:18:47.360 read: IOPS=105, BW=106MiB/s (111MB/s)(1061MiB/10052msec)
00:18:47.360 slat (usec): min=32, max=2090.1k, avg=9433.58, stdev=90301.49
00:18:47.360 clat (msec): min=37, max=4899, avg=744.91, stdev=625.42
00:18:47.360 lat (msec): min=62, max=4900, avg=754.34, stdev=638.03
00:18:47.360 clat percentiles (msec):
00:18:47.360 | 1.00th=[ 73], 5.00th=[ 409], 10.00th=[ 439], 20.00th=[ 518],
00:18:47.360 | 30.00th=[ 535], 40.00th=[ 542], 50.00th=[ 617], 60.00th=[ 667],
00:18:47.360 | 70.00th=[ 684], 80.00th=[ 735], 90.00th=[ 1200], 95.00th=[ 1351],
00:18:47.360 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4933],
00:18:47.360 | 99.99th=[ 4933]
00:18:47.360 bw ( KiB/s): min=67584, max=282624, per=4.83%, avg=178380.80, stdev=71079.01, samples=10
00:18:47.360 iops : min= 66, max= 276, avg=174.20, stdev=69.41, samples=10
00:18:47.360 lat (msec) : 50=0.09%, 100=1.13%, 250=1.23%, 500=15.83%, 750=62.39%
00:18:47.360 lat (msec) : 1000=6.03%, 2000=11.03%, >=2000=2.26%
00:18:47.360 cpu : usr=0.09%, sys=1.70%, ctx=940, majf=0, minf=32769
00:18:47.360 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1%
00:18:47.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.360 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.360 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.360 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.360 job2: (groupid=0, jobs=1): err= 0: pid=1468196: Fri Nov 15 11:00:34 2024
00:18:47.360 read: IOPS=120, BW=121MiB/s (126MB/s)(1220MiB/10117msec)
00:18:47.360 slat (usec): min=32, max=2044.6k, avg=8209.13, stdev=62093.44
00:18:47.360 clat (msec): min=94, max=4530, avg=989.18, stdev=1121.88
00:18:47.360 lat (msec): min=148, max=4530, avg=997.39, stdev=1125.44
00:18:47.360 clat percentiles (msec):
00:18:47.360 | 1.00th=[ 397], 5.00th=[ 401], 10.00th=[ 401], 20.00th=[ 405],
00:18:47.360 | 30.00th=[ 422], 40.00th=[ 531], 50.00th=[ 542], 60.00th=[ 584],
00:18:47.360 | 70.00th=[ 869], 80.00th=[ 1036], 90.00th=[ 3473], 95.00th=[ 4279],
00:18:47.360 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4530], 99.95th=[ 4530],
00:18:47.360 | 99.99th=[ 4530]
00:18:47.360 bw ( KiB/s): min= 6144, max=319488, per=4.32%, avg=159711.00, stdev=114581.41, samples=14
00:18:47.360 iops : min= 6, max= 312, avg=155.93, stdev=111.87, samples=14
00:18:47.360 lat (msec) : 100=0.08%, 250=0.33%, 500=35.33%, 750=31.15%, 1000=10.16%
00:18:47.360 lat (msec) : 2000=12.54%, >=2000=10.41%
00:18:47.360 cpu : usr=0.08%, sys=1.92%, ctx=1376, majf=0, minf=32769
00:18:47.360 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8%
00:18:47.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.360 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.360 issued rwts: total=1220,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.360 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.360 job2: (groupid=0, jobs=1): err= 0: pid=1468197: Fri Nov 15 11:00:34 2024
00:18:47.360 read: IOPS=60, BW=60.6MiB/s (63.5MB/s)(612MiB/10100msec)
00:18:47.360 slat (usec): min=32, max=1945.5k, avg=16419.06, stdev=107930.66
00:18:47.360 clat (msec): min=47, max=3733, avg=1935.29, stdev=1129.35
00:18:47.360 lat (msec): min=113, max=4210, avg=1951.70, stdev=1130.38
00:18:47.360 clat percentiles (msec):
00:18:47.360 | 1.00th=[ 222], 5.00th=[ 667], 10.00th=[ 709], 20.00th=[ 785],
00:18:47.360 | 30.00th=[ 1200], 40.00th=[ 1284], 50.00th=[ 1318], 60.00th=[ 2836],
00:18:47.360 | 70.00th=[ 3138], 80.00th=[ 3239], 90.00th=[ 3373], 95.00th=[ 3473],
00:18:47.361 | 99.00th=[ 3608], 99.50th=[ 3675], 99.90th=[ 3742], 99.95th=[ 3742],
00:18:47.361 | 99.99th=[ 3742]
00:18:47.361 bw ( KiB/s): min=10240, max=190464, per=2.44%, avg=89990.18, stdev=63934.75, samples=11
00:18:47.361 iops : min= 10, max= 186, avg=87.82, stdev=62.50, samples=11
00:18:47.361 lat (msec) : 50=0.16%, 250=1.14%, 500=0.98%, 750=15.03%, 1000=10.95%
00:18:47.361 lat (msec) : 2000=30.23%, >=2000=41.50%
00:18:47.361 cpu : usr=0.04%, sys=1.31%, ctx=922, majf=0, minf=32769
00:18:47.361 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7%
00:18:47.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.361 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:47.361 issued rwts: total=612,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.361 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.361 job2: (groupid=0, jobs=1): err= 0: pid=1468198: Fri Nov 15 11:00:34 2024
00:18:47.361 read: IOPS=82, BW=82.5MiB/s (86.5MB/s)(830MiB/10061msec)
00:18:47.361 slat (usec): min=34, max=2117.8k, avg=12049.10, stdev=98199.71
00:18:47.361 clat (msec): min=55, max=5779, avg=1368.34, stdev=1642.87
00:18:47.361 lat (msec): min=132, max=5782, avg=1380.39, stdev=1653.33
00:18:47.361 clat percentiles (msec):
00:18:47.361 | 1.00th=[ 144], 5.00th=[ 334], 10.00th=[ 506], 20.00th=[ 550],
00:18:47.361 | 30.00th=[ 609], 40.00th=[ 625], 50.00th=[ 625], 60.00th=[ 701],
00:18:47.361 | 70.00th=[ 743], 80.00th=[ 885], 90.00th=[ 4866], 95.00th=[ 5403],
00:18:47.361 | 99.00th=[ 5671], 99.50th=[ 5738], 99.90th=[ 5805], 99.95th=[ 5805],
00:18:47.361 | 99.99th=[ 5805]
00:18:47.361 bw ( KiB/s): min=10240, max=251904, per=3.89%, avg=143688.30, stdev=86187.98, samples=10
00:18:47.361 iops : min= 10, max= 246, avg=140.30, stdev=84.16, samples=10
00:18:47.361 lat (msec) : 100=0.12%, 250=3.86%, 500=3.73%, 750=63.01%, 1000=9.28%
00:18:47.361 lat (msec) : 2000=1.93%, >=2000=18.07%
00:18:47.361 cpu : usr=0.09%, sys=1.36%, ctx=985, majf=0, minf=32769
00:18:47.361 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4%
00:18:47.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.361 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.361 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.361 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.361 job2: (groupid=0, jobs=1): err= 0: pid=1468199: Fri Nov 15 11:00:34 2024
00:18:47.361 read: IOPS=12, BW=12.2MiB/s (12.8MB/s)(123MiB/10085msec)
00:18:47.361 slat (usec): min=430, max=2106.8k, avg=81664.03, stdev=350237.61
00:18:47.361 clat (msec): min=39, max=10069, avg=4102.22, stdev=3845.73
00:18:47.361 lat (msec): min=87, max=10084, avg=4183.88, stdev=3865.36
00:18:47.361 clat percentiles (msec):
00:18:47.361 | 1.00th=[ 88], 5.00th=[ 130], 10.00th=[ 321], 20.00th=[ 667],
00:18:47.361 | 30.00th=[ 1028], 40.00th=[ 1217], 50.00th=[ 3306], 60.00th=[ 3507],
00:18:47.361 | 70.00th=[ 7819], 80.00th=[ 9866], 90.00th=[ 9866], 95.00th=[ 9866],
00:18:47.361 | 99.00th=[10000], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:18:47.361 | 99.99th=[10134]
00:18:47.361 lat (msec) : 50=0.81%, 100=0.81%, 250=5.69%, 500=8.94%, 750=7.32%
00:18:47.361 lat (msec) : 1000=5.69%, 2000=19.51%, >=2000=51.22%
00:18:47.361 cpu : usr=0.00%, sys=0.65%, ctx=377, majf=0, minf=31489
00:18:47.361 IO depths : 1=0.8%, 2=1.6%, 4=3.3%, 8=6.5%, 16=13.0%, 32=26.0%, >=64=48.8%
00:18:47.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.361 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:18:47.361 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.361 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.361 job3: (groupid=0, jobs=1): err= 0: pid=1468201: Fri Nov 15 11:00:34 2024
00:18:47.361 read: IOPS=45, BW=45.5MiB/s (47.7MB/s)(460MiB/10119msec)
00:18:47.361 slat (usec): min=49, max=2095.3k, avg=21759.43, stdev=141179.93
00:18:47.361 clat (msec): min=106, max=7268, avg=2709.99, stdev=2064.87
00:18:47.361 lat (msec): min=128, max=7285, avg=2731.75, stdev=2074.48
00:18:47.361 clat percentiles (msec):
00:18:47.361 | 1.00th=[ 228], 5.00th=[ 405], 10.00th=[ 426], 20.00th=[ 592],
00:18:47.361 | 30.00th=[ 1045], 40.00th=[ 1670], 50.00th=[ 1972], 60.00th=[ 3574],
00:18:47.361 | 70.00th=[ 3775], 80.00th=[ 4279], 90.00th=[ 6074], 95.00th=[ 6812],
00:18:47.361 | 99.00th=[ 7215], 99.50th=[ 7282], 99.90th=[ 7282], 99.95th=[ 7282],
00:18:47.361 | 99.99th=[ 7282]
00:18:47.361 bw ( KiB/s): min= 2048, max=172032, per=1.53%, avg=56661.33, stdev=48780.82, samples=12
00:18:47.361 iops : min= 2, max= 168, avg=55.33, stdev=47.64, samples=12
00:18:47.361 lat (msec) : 250=1.09%, 500=11.09%, 750=11.74%, 1000=5.00%, 2000=21.96%
00:18:47.361 lat (msec) : >=2000=49.13%
00:18:47.361 cpu : usr=0.02%, sys=1.53%, ctx=792, majf=0, minf=32769
00:18:47.361 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3%
00:18:47.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.361 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:47.361 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.361 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.361 job3: (groupid=0, jobs=1): err= 0: pid=1468202: Fri Nov 15 11:00:34 2024
00:18:47.361 read: IOPS=66, BW=66.5MiB/s (69.7MB/s)(671MiB/10096msec)
00:18:47.361 slat (usec): min=32, max=1946.4k, avg=14898.90, stdev=105347.21
00:18:47.361 clat (msec): min=95, max=5004, avg=1828.93, stdev=1517.10
00:18:47.361 lat (msec): min=103, max=5012, avg=1843.83, stdev=1521.82
00:18:47.361 clat percentiles (msec):
00:18:47.361 | 1.00th=[ 157], 5.00th=[ 659], 10.00th=[ 667], 20.00th=[ 667],
00:18:47.361 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 902], 60.00th=[ 1737],
00:18:47.361 | 70.00th=[ 2232], 80.00th=[ 2769], 90.00th=[ 4866], 95.00th=[ 4933],
00:18:47.361 | 99.00th=[ 4933], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000],
00:18:47.361 | 99.99th=[ 5000]
00:18:47.361 bw ( KiB/s): min=20480, max=196608, per=2.32%, avg=85715.69, stdev=60353.78, samples=13
00:18:47.361 iops : min= 20, max= 192, avg=83.69, stdev=58.95, samples=13
00:18:47.361 lat (msec) : 100=0.15%, 250=1.49%, 500=1.64%, 750=41.58%, 1000=6.11%
00:18:47.361 lat (msec) : 2000=12.22%, >=2000=36.81%
00:18:47.361 cpu : usr=0.00%, sys=1.55%, ctx=980, majf=0, minf=32769
00:18:47.361 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6%
00:18:47.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.361 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:47.361 issued rwts: total=671,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.361 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.361 job3: (groupid=0, jobs=1): err= 0: pid=1468203: Fri Nov 15 11:00:34 2024
00:18:47.361 read: IOPS=140, BW=140MiB/s (147MB/s)(1416MiB/10101msec)
00:18:47.361 slat (usec): min=32, max=2074.9k, avg=7064.16, stdev=56638.38
00:18:47.361 clat (msec): min=94, max=3235, avg=716.81, stdev=411.45
00:18:47.361 lat (msec): min=171, max=3247, avg=723.87, stdev=417.13
00:18:47.361 clat percentiles (msec):
00:18:47.361 | 1.00th=[ 184], 5.00th=[ 506], 10.00th=[ 510], 20.00th=[ 514],
00:18:47.361 | 30.00th=[ 550], 40.00th=[ 584], 50.00th=[ 634], 60.00th=[ 651],
00:18:47.361 | 70.00th=[ 709], 80.00th=[ 810], 90.00th=[ 1020], 95.00th=[ 1133],
00:18:47.361 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3239], 99.95th=[ 3239],
00:18:47.361 | 99.99th=[ 3239]
00:18:47.361 bw ( KiB/s): min=87888, max=251904, per=5.10%, avg=188549.71, stdev=55657.65, samples=14
00:18:47.361 iops : min= 85, max= 246, avg=184.07, stdev=54.47, samples=14
00:18:47.361 lat (msec) : 100=0.07%, 250=1.06%, 500=3.25%, 750=71.96%, 1000=12.64%
00:18:47.361 lat (msec) : 2000=8.90%, >=2000=2.12%
00:18:47.361 cpu : usr=0.03%, sys=1.88%, ctx=1443, majf=0, minf=32769
00:18:47.361 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6%
00:18:47.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.361 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.361 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.361 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.361 job3: (groupid=0, jobs=1): err= 0: pid=1468204: Fri Nov 15 11:00:34 2024
00:18:47.361 read: IOPS=103, BW=103MiB/s (108MB/s)(1041MiB/10084msec)
00:18:47.361 slat (usec): min=33, max=2038.7k, avg=9601.69, stdev=88573.33
00:18:47.361 clat (msec): min=82, max=5924, avg=1165.75, stdev=1641.38
00:18:47.361 lat (msec): min=84, max=5929, avg=1175.35, stdev=1646.74
00:18:47.361 clat percentiles (msec):
00:18:47.361 | 1.00th=[ 288], 5.00th=[ 401], 10.00th=[ 405], 20.00th=[ 405],
00:18:47.361 | 30.00th=[ 414], 40.00th=[ 451], 50.00th=[ 634], 60.00th=[ 676],
00:18:47.361 | 70.00th=[ 693], 80.00th=[ 735], 90.00th=[ 5134], 95.00th=[ 5738],
00:18:47.361 | 99.00th=[ 5873], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940],
00:18:47.361 | 99.99th=[ 5940]
00:18:47.361 bw ( KiB/s): min= 8192, max=323584, per=3.90%, avg=143960.54, stdev=119537.68, samples=13
00:18:47.361 iops : min= 8, max= 316, avg=140.54, stdev=116.72, samples=13
00:18:47.361 lat (msec) : 100=0.29%, 250=0.48%, 500=43.52%, 750=37.66%, 1000=4.32%
00:18:47.361 lat (msec) : 2000=1.15%, >=2000=12.58%
00:18:47.361 cpu : usr=0.08%, sys=1.75%, ctx=1043, majf=0, minf=32769
00:18:47.361 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9%
00:18:47.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.361 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.361 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.361 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.361 job3: (groupid=0, jobs=1): err= 0: pid=1468205: Fri Nov 15 11:00:34 2024
00:18:47.361 read: IOPS=60, BW=60.7MiB/s (63.7MB/s)(613MiB/10091msec)
00:18:47.361 slat (usec): min=31, max=2101.3k, avg=16327.46, stdev=145181.84
00:18:47.361 clat (msec): min=79, max=5252, avg=1239.40, stdev=1224.44
00:18:47.361 lat (msec): min=137, max=7242, avg=1255.72, stdev=1248.30
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 176], 5.00th=[ 405], 10.00th=[ 414], 20.00th=[ 468],
00:18:47.362 | 30.00th=[ 523], 40.00th=[ 531], 50.00th=[ 542], 60.00th=[ 709],
00:18:47.362 | 70.00th=[ 1133], 80.00th=[ 2702], 90.00th=[ 3004], 95.00th=[ 3205],
00:18:47.362 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5269], 99.95th=[ 5269],
00:18:47.362 | 99.99th=[ 5269]
00:18:47.362 bw ( KiB/s): min=20480, max=309248, per=3.84%, avg=141893.57, stdev=98232.85, samples=7
00:18:47.362 iops : min= 20, max= 302, avg=138.43, stdev=96.01, samples=7
00:18:47.362 lat (msec) : 100=0.16%, 250=1.63%, 500=25.29%, 750=34.26%, 1000=6.20%
00:18:47.362 lat (msec) : 2000=8.16%, >=2000=24.31%
00:18:47.362 cpu : usr=0.01%, sys=1.21%, ctx=664, majf=0, minf=32769
00:18:47.362 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7%
00:18:47.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:47.362 issued rwts: total=613,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.362 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.362 job3: (groupid=0, jobs=1): err= 0: pid=1468206: Fri Nov 15 11:00:34 2024
00:18:47.362 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(407MiB/10015msec)
00:18:47.362 slat (usec): min=31, max=2611.7k, avg=24572.60, stdev=184349.12
00:18:47.362 clat (msec): min=11, max=5209, avg=2071.63, stdev=1596.60
00:18:47.362 lat (msec): min=14, max=5256, avg=2096.21, stdev=1602.02
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 16], 5.00th=[ 239], 10.00th=[ 659], 20.00th=[ 693],
00:18:47.362 | 30.00th=[ 735], 40.00th=[ 810], 50.00th=[ 1062], 60.00th=[ 2836],
00:18:47.362 | 70.00th=[ 3440], 80.00th=[ 4010], 90.00th=[ 4178], 95.00th=[ 4866],
00:18:47.362 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5201], 99.95th=[ 5201],
00:18:47.362 | 99.99th=[ 5201]
00:18:47.362 bw ( KiB/s): min= 6144, max=202752, per=2.22%, avg=81885.00, stdev=73063.22, samples=7
00:18:47.362 iops : min= 6, max= 198, avg=79.86, stdev=71.39, samples=7
00:18:47.362 lat (msec) : 20=1.47%, 50=0.74%, 100=1.47%, 250=1.47%, 500=2.21%
00:18:47.362 lat (msec) : 750=27.76%, 1000=13.76%, 2000=7.62%, >=2000=43.49%
00:18:47.362 cpu : usr=0.04%, sys=0.99%, ctx=572, majf=0, minf=32769
00:18:47.362 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.9%, >=64=84.5%
00:18:47.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.362 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:18:47.362 issued rwts: total=407,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.362 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.362 job3: (groupid=0, jobs=1): err= 0: pid=1468207: Fri Nov 15 11:00:34 2024
00:18:47.362 read: IOPS=41, BW=41.8MiB/s (43.8MB/s)(422MiB/10106msec)
00:18:47.362 slat (usec): min=31, max=2111.9k, avg=23717.57, stdev=146418.28
00:18:47.362 clat (msec): min=95, max=8028, avg=2240.71, stdev=1416.21
00:18:47.362 lat (msec): min=124, max=8058, avg=2264.43, stdev=1418.40
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 186], 5.00th=[ 676], 10.00th=[ 776], 20.00th=[ 802],
00:18:47.362 | 30.00th=[ 1116], 40.00th=[ 1485], 50.00th=[ 1955], 60.00th=[ 2232],
00:18:47.362 | 70.00th=[ 3440], 80.00th=[ 4044], 90.00th=[ 4212], 95.00th=[ 4279],
00:18:47.362 | 99.00th=[ 4463], 99.50th=[ 7953], 99.90th=[ 8020], 99.95th=[ 8020],
00:18:47.362 | 99.99th=[ 8020]
00:18:47.362 bw ( KiB/s): min= 4087, max=98304, per=1.49%, avg=54889.64, stdev=27346.95, samples=11
00:18:47.362 iops : min= 3, max= 96, avg=53.45, stdev=26.93, samples=11
00:18:47.362 lat (msec) : 100=0.24%, 250=1.18%, 500=1.66%, 750=3.32%, 1000=21.56%
00:18:47.362 lat (msec) : 2000=25.12%, >=2000=46.92%
00:18:47.362 cpu : usr=0.07%, sys=1.14%, ctx=828, majf=0, minf=32769
00:18:47.362 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1%
00:18:47.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.362 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:47.362 issued rwts: total=422,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.362 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.362 job3: (groupid=0, jobs=1): err= 0: pid=1468208: Fri Nov 15 11:00:34 2024
00:18:47.362 read: IOPS=60, BW=60.6MiB/s (63.6MB/s)(608MiB/10032msec)
00:18:47.362 slat (usec): min=33, max=2096.7k, avg=16443.41, stdev=122849.70
00:18:47.362 clat (msec): min=31, max=5682, avg=1228.90, stdev=1206.57
00:18:47.362 lat (msec): min=45, max=7778, avg=1245.35, stdev=1232.22
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 75], 5.00th=[ 506], 10.00th=[ 506], 20.00th=[ 510],
00:18:47.362 | 30.00th=[ 514], 40.00th=[ 518], 50.00th=[ 542], 60.00th=[ 567],
00:18:47.362 | 70.00th=[ 1003], 80.00th=[ 2735], 90.00th=[ 3406], 95.00th=[ 3608],
00:18:47.362 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 5671], 99.95th=[ 5671],
00:18:47.362 | 99.99th=[ 5671]
00:18:47.362 bw ( KiB/s): min=10240, max=253952, per=2.82%, avg=104211.33, stdev=105126.86, samples=9
00:18:47.362 iops : min= 10, max= 248, avg=101.67, stdev=102.73, samples=9
00:18:47.362 lat (msec) : 50=0.33%, 100=0.82%, 250=1.48%, 500=1.15%, 750=62.83%
00:18:47.362 lat (msec) : 1000=3.12%, 2000=6.58%, >=2000=23.68%
00:18:47.362 cpu : usr=0.01%, sys=1.33%, ctx=837, majf=0, minf=32769
00:18:47.362 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:18:47.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.362 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:47.362 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.362 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.362 job3: (groupid=0, jobs=1): err= 0: pid=1468209: Fri Nov 15 11:00:34 2024
00:18:47.362 read: IOPS=21, BW=21.7MiB/s (22.8MB/s)(219MiB/10092msec)
00:18:47.362 slat (usec): min=102, max=2124.1k, avg=45735.82, stdev=243427.66
00:18:47.362 clat (msec): min=74, max=8096, avg=1828.27, stdev=1221.05
00:18:47.362 lat (msec): min=92, max=8120, avg=1874.00, stdev=1292.18
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 94], 5.00th=[ 118], 10.00th=[ 326], 20.00th=[ 844],
00:18:47.362 | 30.00th=[ 1351], 40.00th=[ 1821], 50.00th=[ 1921], 60.00th=[ 2056],
00:18:47.362 | 70.00th=[ 2232], 80.00th=[ 2333], 90.00th=[ 2467], 95.00th=[ 3910],
00:18:47.362 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087],
00:18:47.362 | 99.99th=[ 8087]
00:18:47.362 bw ( KiB/s): min=34816, max=65871, per=1.26%, avg=46675.75, stdev=14277.18, samples=4
00:18:47.362 iops : min= 34, max= 64, avg=45.50, stdev=13.80, samples=4
00:18:47.362 lat (msec) : 100=2.28%, 250=5.02%, 500=6.39%, 750=4.11%, 1000=4.11%
00:18:47.362 lat (msec) : 2000=34.25%, >=2000=43.84%
00:18:47.362 cpu : usr=0.01%, sys=0.68%, ctx=660, majf=0, minf=32769
00:18:47.362 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.3%, 32=14.6%, >=64=71.2%
00:18:47.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.362 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1%
00:18:47.362 issued rwts: total=219,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.362 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.362 job3: (groupid=0, jobs=1): err= 0: pid=1468210: Fri Nov 15 11:00:34 2024
00:18:47.362 read: IOPS=16, BW=16.6MiB/s (17.4MB/s)(167MiB/10070msec)
00:18:47.362 slat (usec): min=495, max=2146.7k, avg=59899.39, stdev=280573.18
00:18:47.362 clat (msec): min=65, max=9955, avg=4044.52, stdev=3692.56
00:18:47.362 lat (msec): min=88, max=9982, avg=4104.41, stdev=3708.66
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 89], 5.00th=[ 271], 10.00th=[ 447], 20.00th=[ 1003],
00:18:47.362 | 30.00th=[ 1452], 40.00th=[ 1972], 50.00th=[ 2232], 60.00th=[ 3037],
00:18:47.362 | 70.00th=[ 5604], 80.00th=[ 9329], 90.00th=[ 9731], 95.00th=[ 9866],
00:18:47.362 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:18:47.362 | 99.99th=[10000]
00:18:47.362 bw ( KiB/s): min= 6144, max=40960, per=0.74%, avg=27306.67, stdev=18583.08, samples=3
00:18:47.362 iops : min= 6, max= 40, avg=26.67, stdev=18.15, samples=3
00:18:47.362 lat (msec) : 100=1.20%, 250=3.59%, 500=5.99%, 750=4.79%, 1000=4.79%
00:18:47.362 lat (msec) : 2000=22.16%, >=2000=57.49%
00:18:47.362 cpu : usr=0.00%, sys=1.01%, ctx=630, majf=0, minf=32394
00:18:47.362 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3%
00:18:47.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.362 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4%
00:18:47.362 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.362 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.362 job3: (groupid=0, jobs=1): err= 0: pid=1468211: Fri Nov 15 11:00:34 2024
00:18:47.362 read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(349MiB/10083msec)
00:18:47.362 slat (usec): min=32, max=2080.2k, avg=28649.40, stdev=188976.39
00:18:47.362 clat (msec): min=81, max=8018, avg=2335.59, stdev=2467.16
00:18:47.362 lat (msec): min=94, max=8019, avg=2364.24, stdev=2481.70
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 178], 5.00th=[ 418], 10.00th=[ 617], 20.00th=[ 760],
00:18:47.362 | 30.00th=[ 869], 40.00th=[ 1133], 50.00th=[ 1385], 60.00th=[ 1536],
00:18:47.362 | 70.00th=[ 1670], 80.00th=[ 3809], 90.00th=[ 7819], 95.00th=[ 7953],
00:18:47.362 | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020],
00:18:47.362 | 99.99th=[ 8020]
00:18:47.362 bw ( KiB/s): min=36864, max=223232, per=2.46%, avg=90931.20, stdev=77697.24, samples=5
00:18:47.362 iops : min= 36, max= 218, avg=88.80, stdev=75.88, samples=5
00:18:47.362 lat (msec) : 100=0.57%, 250=1.15%, 500=4.01%, 750=13.75%, 1000=16.05%
00:18:47.362 lat (msec) : 2000=42.69%, >=2000=21.78%
00:18:47.362 cpu : usr=0.01%, sys=1.15%, ctx=736, majf=0, minf=32769
00:18:47.362 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.9%
00:18:47.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.362 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:18:47.362 issued rwts: total=349,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.362 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.362 job3: (groupid=0, jobs=1): err= 0: pid=1468212: Fri Nov 15 11:00:34 2024
00:18:47.362 read: IOPS=37, BW=37.1MiB/s (38.9MB/s)(373MiB/10064msec)
00:18:47.362 slat (usec): min=35, max=2068.8k, avg=26814.28, stdev=176203.83
00:18:47.362 clat (msec): min=60, max=5771, avg=2540.37, stdev=2059.51
00:18:47.362 lat (msec): min=65, max=5773, avg=2567.18, stdev=2060.80
00:18:47.362 clat percentiles (msec):
00:18:47.362 | 1.00th=[ 87], 5.00th=[ 326], 10.00th=[ 718], 20.00th=[ 1003],
00:18:47.362 | 30.00th=[ 1028], 40.00th=[ 1062], 50.00th=[ 1167], 60.00th=[ 1368],
00:18:47.362 | 70.00th=[ 4933], 80.00th=[ 5336], 90.00th=[ 5537], 95.00th=[ 5671],
00:18:47.363 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738],
00:18:47.363 | 99.99th=[ 5738]
00:18:47.363 bw ( KiB/s): min=14336, max=133120, per=1.70%, avg=62985.62, stdev=45033.50, samples=8
00:18:47.363 iops : min= 14, max= 130, avg=61.50, stdev=43.98, samples=8
00:18:47.363 lat (msec) : 100=1.07%, 250=2.68%, 500=3.75%, 750=2.68%, 1000=8.85%
00:18:47.363 lat (msec) : 2000=41.82%, >=2000=39.14%
00:18:47.363 cpu : usr=0.01%, sys=1.02%, ctx=791, majf=0, minf=32769
00:18:47.363 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.1%
00:18:47.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.363 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:18:47.363 issued rwts: total=373,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.363 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.363 job3: (groupid=0, jobs=1): err= 0: pid=1468213: Fri Nov 15 11:00:34 2024
00:18:47.363 read: IOPS=187, BW=188MiB/s (197MB/s)(1891MiB/10081msec)
00:18:47.363 slat (usec): min=34, max=125742, avg=5284.53, stdev=11191.16
00:18:47.363 clat (msec): min=78, max=1621, avg=651.89, stdev=298.73
00:18:47.363 lat (msec): min=81, max=1625, avg=657.17, stdev=300.68
00:18:47.363 clat percentiles (msec):
00:18:47.363 | 1.00th=[ 180], 5.00th=[ 380], 10.00th=[ 380], 20.00th=[ 384],
00:18:47.363 | 30.00th=[ 401], 40.00th=[ 518], 50.00th=[ 575], 60.00th=[ 651],
00:18:47.363 | 70.00th=[ 768], 80.00th=[ 810], 90.00th=[ 1053], 95.00th=[ 1368],
00:18:47.363 | 99.00th=[ 1536], 99.50th=[ 1552], 99.90th=[ 1620], 99.95th=[ 1620],
00:18:47.363 | 99.99th=[ 1620]
00:18:47.363 bw ( KiB/s): min=77824, max=348160, per=5.15%, avg=190122.95, stdev=89011.60, samples=19
00:18:47.363 iops : min= 76, max= 340, avg=185.63, stdev=86.93, samples=19
00:18:47.363 lat (msec) : 100=0.58%, 250=0.95%, 500=33.10%, 750=31.31%, 1000=22.00%
00:18:47.363 lat (msec) : 2000=12.06%
00:18:47.363 cpu : usr=0.03%, sys=2.69%, ctx=1644, majf=0, minf=32769
00:18:47.363 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
00:18:47.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.363 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.363 issued rwts: total=1891,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.363 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.363 job4: (groupid=0, jobs=1): err= 0: pid=1468214: Fri Nov 15 11:00:34 2024
00:18:47.363 read: IOPS=29, BW=29.2MiB/s (30.6MB/s)(296MiB/10142msec)
00:18:47.363 slat (usec): min=88, max=2137.4k, avg=33940.38, stdev=210721.67
00:18:47.363 clat (msec): min=94, max=8115, avg=2332.32, stdev=2480.21
00:18:47.363 lat (msec): min=146, max=8130, avg=2366.26, stdev=2498.09
00:18:47.363 clat percentiles (msec):
00:18:47.363 | 1.00th=[ 167], 5.00th=[ 477], 10.00th=[ 701], 20.00th=[ 927],
00:18:47.363 | 30.00th=[ 1083], 40.00th=[ 1250], 50.00th=[ 1401], 60.00th=[ 1586],
00:18:47.363 | 70.00th=[ 1737], 80.00th=[ 1871], 90.00th=[ 8020], 95.00th=[ 8087],
00:18:47.363 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087],
00:18:47.363 | 99.99th=[ 8087]
00:18:47.363 bw ( KiB/s): min=22528, max=155648, per=1.86%, avg=68812.80, stdev=50656.44, samples=5
00:18:47.363 iops : min= 22, max= 152, avg=67.20, stdev=49.47, samples=5
00:18:47.363 lat (msec) : 100=0.34%, 250=1.35%, 500=4.39%, 750=4.05%, 1000=14.19%
00:18:47.363 lat (msec) : 2000=58.11%, >=2000=17.57%
00:18:47.363 cpu : usr=0.00%, sys=1.04%, ctx=719, majf=0, minf=32769
00:18:47.363 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.7%
00:18:47.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.363 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:18:47.363 issued rwts: total=296,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.363 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.363 job4: (groupid=0, jobs=1): err= 0: pid=1468215: Fri Nov 15 11:00:34 2024
00:18:47.363 read: IOPS=16, BW=16.1MiB/s (16.9MB/s)(162MiB/10046msec)
00:18:47.363 slat (usec): min=54, max=2101.9k, avg=61827.98, stdev=279985.20
00:18:47.363 clat (msec): min=28, max=9330, avg=2101.82, stdev=1854.66
00:18:47.363 lat (msec): min=69, max=9369, avg=2163.65, stdev=1938.89
00:18:47.363 clat percentiles (msec):
00:18:47.363 | 1.00th=[ 70], 5.00th=[ 82], 10.00th=[ 165], 20.00th=[ 659],
00:18:47.363 | 30.00th=[ 1070], 40.00th=[ 1485], 50.00th=[ 1821], 60.00th=[ 2299],
00:18:47.363 | 70.00th=[ 2668], 80.00th=[ 2970], 90.00th=[ 3138], 95.00th=[ 5201],
00:18:47.363 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:18:47.363 | 99.99th=[ 9329]
00:18:47.363 bw ( KiB/s): min=16384, max=53248, per=0.94%, avg=34816.00, stdev=26066.78, samples=2
00:18:47.363 iops : min= 16, max= 52, avg=34.00, stdev=25.46, samples=2
00:18:47.363 lat (msec) : 50=0.62%, 100=9.26%, 250=1.23%, 500=5.56%, 750=4.94%
00:18:47.363 lat (msec) : 1000=5.56%, 2000=28.40%, >=2000=44.44%
00:18:47.363 cpu : usr=0.00%, sys=0.69%, ctx=606, majf=0, minf=32769
00:18:47.363 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.9%, 32=19.8%, >=64=61.1%
00:18:47.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.363 complete : 0=0.0%, 4=97.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.8%
00:18:47.363 issued rwts: total=162,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.363 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.363 job4: (groupid=0, jobs=1): err= 0: pid=1468216: Fri Nov 15 11:00:34 2024
00:18:47.363 read: IOPS=104, BW=104MiB/s (109MB/s)(1049MiB/10064msec)
00:18:47.363 slat (usec): min=40, max=1651.2k, avg=9530.73, stdev=53014.98
00:18:47.363 clat (msec): min=59, max=3054, avg=1162.28, stdev=808.81
00:18:47.363 lat (msec): min=109, max=3056, avg=1171.81, stdev=811.53
00:18:47.363 clat percentiles (msec):
00:18:47.363 | 1.00th=[ 284], 5.00th=[ 430], 10.00th=[ 506], 20.00th=[ 659],
00:18:47.363 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 718], 60.00th=[ 751],
00:18:47.363 | 70.00th=[ 1217], 80.00th=[ 2299], 90.00th=[ 2567], 95.00th=[ 2635],
00:18:47.363 | 99.00th=[ 2970], 99.50th=[ 2970], 99.90th=[ 3071], 99.95th=[ 3071],
00:18:47.363 | 99.99th=[ 3071]
00:18:47.363 bw ( KiB/s): min=22528, max=307200, per=3.00%, avg=111033.94, stdev=82486.38, samples=17
00:18:47.363 iops : min= 22, max= 300, avg=108.41, stdev=80.57, samples=17
00:18:47.363 lat (msec) : 100=0.10%, 250=0.67%, 500=8.96%, 750=50.05%, 1000=8.77%
00:18:47.363 lat (msec) : 2000=8.48%, >=2000=22.97%
00:18:47.363 cpu : usr=0.06%, sys=1.87%, ctx=1271, majf=0, minf=32769
00:18:47.363 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0%
00:18:47.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.363 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.363 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.363 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.363 job4: (groupid=0, jobs=1): err= 0: pid=1468217: Fri Nov 15 11:00:34 2024
00:18:47.363 read: IOPS=134, BW=134MiB/s (141MB/s)(1351MiB/10060msec)
00:18:47.363 slat (usec): min=34, max=2011.6k, avg=7398.00, stdev=56756.78
00:18:47.363 clat (msec): min=57, max=2773, avg=895.06, stdev=585.52
00:18:47.363 lat (msec): min=111, max=2775, avg=902.46, stdev=587.34
00:18:47.363 clat percentiles (msec):
00:18:47.363 | 1.00th=[ 133], 5.00th=[ 523], 10.00th=[ 642], 20.00th=[ 667],
00:18:47.363 | 30.00th=[ 676], 40.00th=[ 718], 50.00th=[ 735], 60.00th=[ 768],
00:18:47.363 | 70.00th=[ 793], 80.00th=[ 818], 90.00th=[ 902], 95.00th=[ 2668],
00:18:47.363 | 99.00th=[ 2735], 99.50th=[ 2769], 99.90th=[ 2769], 99.95th=[ 2769],
00:18:47.363 | 99.99th=[ 2769]
00:18:47.363 bw ( KiB/s): min=32768, max=198656, per=4.24%, avg=156672.00, stdev=46539.65, samples=16
00:18:47.363 iops : min= 32, max= 194, avg=153.00, stdev=45.45, samples=16
00:18:47.363 lat (msec) : 100=0.07%, 250=2.29%, 500=2.29%, 750=51.74%, 1000=34.20%
00:18:47.363 lat (msec) : >=2000=9.40%
00:18:47.363 cpu : usr=0.03%, sys=1.87%, ctx=1099, majf=0, minf=32769
00:18:47.363 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3%
00:18:47.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.363 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.363 issued rwts: total=1351,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.363 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.363 job4: (groupid=0, jobs=1): err= 0: pid=1468218: Fri Nov 15 11:00:34 2024
00:18:47.363 read: IOPS=52, BW=52.1MiB/s (54.6MB/s)(527MiB/10117msec)
00:18:47.363 slat (usec): min=47, max=2094.8k, avg=19039.49, stdev=127778.37
00:18:47.363 clat (msec): min=78, max=7249, avg=2361.14, stdev=2280.57
00:18:47.363 lat (msec): min=171, max=7307, avg=2380.18, stdev=2290.10
00:18:47.363 clat percentiles (msec):
00:18:47.363 | 1.00th=[ 271], 5.00th=[ 684], 10.00th=[ 768], 20.00th=[ 768],
00:18:47.363 | 30.00th=[ 776], 40.00th=[ 785], 50.00th=[ 802], 60.00th=[ 1502],
00:18:47.363 | 70.00th=[ 2903], 80.00th=[ 5403], 90.00th=[ 6409], 95.00th=[ 6879],
00:18:47.363 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7282], 99.95th=[ 7282],
00:18:47.363 | 99.99th=[ 7282]
00:18:47.363 bw ( KiB/s): min= 6144, max=172032, per=1.70%, avg=62857.85, stdev=53688.50, samples=13
00:18:47.363 iops : min= 6, max= 168, avg=61.38, stdev=52.43, samples=13
00:18:47.363 lat (msec) : 100=0.19%, 250=0.57%, 500=1.71%, 750=4.55%, 1000=49.15%
00:18:47.363 lat (msec) : 2000=6.83%, >=2000=37.00%
00:18:47.363 cpu : usr=0.07%, sys=1.57%, ctx=942, majf=0, minf=32769
00:18:47.363 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0%
00:18:47.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.363 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:47.363 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.363 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.363 job4: (groupid=0, jobs=1): err= 0: pid=1468219: Fri Nov 15 11:00:34 2024
00:18:47.363 read: IOPS=43, BW=43.3MiB/s (45.4MB/s)(436MiB/10065msec)
00:18:47.363 slat (usec): min=34, max=2141.2k, avg=22932.85, stdev=145033.93
00:18:47.363 clat (msec): min=63, max=7777, avg=2420.15, stdev=1954.88
00:18:47.363 lat (msec): min=104, max=9821, avg=2443.08, stdev=1974.92
00:18:47.363 clat percentiles (msec):
00:18:47.363 | 1.00th=[ 111], 5.00th=[ 347], 10.00th=[ 751], 20.00th=[ 785],
00:18:47.363 | 30.00th=[ 785], 40.00th=[ 818], 50.00th=[ 1603], 60.00th=[ 2836],
00:18:47.363 | 70.00th=[ 3373], 80.00th=[ 4597], 90.00th=[ 5671], 95.00th=[ 6007],
00:18:47.363 | 99.00th=[ 6074], 99.50th=[ 7752], 99.90th=[ 7752], 99.95th=[ 7752],
00:18:47.363 | 99.99th=[ 7752]
00:18:47.363 bw ( KiB/s): min= 4096, max=188416, per=1.43%, avg=52727.75, stdev=59113.43, samples=12
00:18:47.363 iops : min= 4, max= 184, avg=51.42, stdev=57.73, samples=12
00:18:47.363 lat (msec) : 100=0.23%, 250=2.98%, 500=3.21%, 750=4.13%, 1000=34.86%
00:18:47.364 lat (msec) : 2000=7.34%, >=2000=47.25%
00:18:47.364 cpu : usr=0.01%, sys=1.16%, ctx=776, majf=0, minf=32769
00:18:47.364 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.3%, >=64=85.6%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:47.364 issued rwts: total=436,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.364 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.364 job4: (groupid=0, jobs=1): err= 0: pid=1468220: Fri Nov 15 11:00:34 2024
00:18:47.364 read: IOPS=14, BW=14.0MiB/s (14.7MB/s)(142MiB/10109msec)
00:18:47.364 slat (usec): min=581, max=2107.8k, avg=70625.01, stdev=295695.98
00:18:47.364 clat (msec): min=79, max=10006, avg=3836.11, stdev=3414.86
00:18:47.364 lat (msec): min=165, max=10029, avg=3906.73, stdev=3440.18
00:18:47.364 clat percentiles (msec):
00:18:47.364 | 1.00th=[ 165], 5.00th=[ 460], 10.00th=[ 676], 20.00th=[ 1250],
00:18:47.364 | 30.00th=[ 1653], 40.00th=[ 2165], 50.00th=[ 2500], 60.00th=[ 2970],
00:18:47.364 | 70.00th=[ 3373], 80.00th=[ 9463], 90.00th=[ 9731], 95.00th=[ 9866],
00:18:47.364 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:18:47.364 | 99.99th=[10000]
00:18:47.364 bw ( KiB/s): min= 6144, max=22528, per=0.39%, avg=14336.00, stdev=11585.24, samples=2
00:18:47.364 iops : min= 6, max= 22, avg=14.00, stdev=11.31, samples=2
00:18:47.364 lat (msec) : 100=0.70%, 250=1.41%, 500=4.93%, 750=4.93%, 1000=5.63%
00:18:47.364 lat (msec) : 2000=19.01%, >=2000=63.38%
00:18:47.364 cpu : usr=0.01%, sys=0.83%, ctx=674, majf=0, minf=32769
00:18:47.364 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.3%, 32=22.5%, >=64=55.6%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=93.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.2%
00:18:47.364 issued rwts: total=142,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.364 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.364 job4: (groupid=0, jobs=1): err= 0: pid=1468221: Fri Nov 15 11:00:34 2024
00:18:47.364 read: IOPS=173, BW=174MiB/s (182MB/s)(1764MiB/10143msec)
00:18:47.364 slat (usec): min=36, max=2062.7k, avg=5699.07, stdev=50060.13
00:18:47.364 clat (msec): min=80, max=2813, avg=711.33, stdev=630.83
00:18:47.364 lat (msec): min=161, max=2818, avg=717.03, stdev=633.07
00:18:47.364 clat percentiles (msec):
00:18:47.364 | 1.00th=[ 259], 5.00th=[ 288], 10.00th=[ 313], 20.00th=[ 363],
00:18:47.364 | 30.00th=[ 414], 40.00th=[ 506], 50.00th=[ 510], 60.00th=[ 542],
00:18:47.364 | 70.00th=[ 617], 80.00th=[ 659], 90.00th=[ 1569], 95.00th=[ 2702],
00:18:47.364 | 99.00th=[ 2769], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802],
00:18:47.364 | 99.99th=[ 2802]
00:18:47.364 bw ( KiB/s): min= 4096, max=417792, per=5.67%, avg=209375.75, stdev=116301.75, samples=16
00:18:47.364 iops : min= 4, max= 408, avg=204.44, stdev=113.56, samples=16
00:18:47.364 lat (msec) : 100=0.06%, 250=0.17%, 500=37.81%, 750=46.94%, 1000=0.85%
00:18:47.364 lat (msec) : 2000=6.97%, >=2000=7.20%
00:18:47.364 cpu : usr=0.09%, sys=2.71%, ctx=1607, majf=0, minf=32770
00:18:47.364 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.364 issued rwts: total=1764,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.364 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.364 job4: (groupid=0, jobs=1): err= 0: pid=1468222: Fri Nov 15 11:00:34 2024
00:18:47.364 read: IOPS=25, BW=25.1MiB/s (26.3MB/s)(253MiB/10088msec)
00:18:47.364 slat (usec): min=31, max=2114.8k, avg=39640.13, stdev=225162.70
00:18:47.364 clat (msec): min=58, max=9253, avg=1433.35, stdev=1779.53
00:18:47.364 lat (msec): min=132, max=9261, avg=1472.99, stdev=1844.50
00:18:47.364 clat percentiles (msec):
00:18:47.364 | 1.00th=[ 136], 5.00th=[ 148], 10.00th=[ 243], 20.00th=[ 435],
00:18:47.364 | 30.00th=[ 542], 40.00th=[ 735], 50.00th=[ 827], 60.00th=[ 1183],
00:18:47.364 | 70.00th=[ 1519], 80.00th=[ 1938], 90.00th=[ 2500], 95.00th=[ 4866],
00:18:47.364 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:18:47.364 | 99.99th=[ 9194]
00:18:47.364 bw ( KiB/s): min=96256, max=156305, per=3.42%, avg=126280.50, stdev=42461.06, samples=2
00:18:47.364 iops : min= 94, max= 152, avg=123.00, stdev=41.01, samples=2
00:18:47.364 lat (msec) : 100=0.40%, 250=12.25%, 500=12.25%, 750=18.58%, 1000=10.28%
00:18:47.364 lat (msec) : 2000=27.27%, >=2000=18.97%
00:18:47.364 cpu : usr=0.00%, sys=0.75%, ctx=579, majf=0, minf=32769
00:18:47.364 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.6%, >=64=75.1%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:18:47.364 issued rwts: total=253,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.364 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.364 job4: (groupid=0, jobs=1): err= 0: pid=1468223: Fri Nov 15 11:00:34 2024
00:18:47.364 read: IOPS=70, BW=70.5MiB/s (73.9MB/s)(710MiB/10074msec)
00:18:47.364 slat (usec): min=28, max=2080.6k, avg=14084.44, stdev=111493.89
00:18:47.364 clat (msec): min=70, max=4921, avg=1052.18, stdev=812.49
00:18:47.364 lat (msec): min=73, max=4922, avg=1066.26, stdev=827.61
00:18:47.364 clat percentiles (msec):
00:18:47.364 | 1.00th=[ 140], 5.00th=[ 456], 10.00th=[ 506], 20.00th=[ 510],
00:18:47.364 | 30.00th=[ 531], 40.00th=[ 584], 50.00th=[ 634], 60.00th=[ 743],
00:18:47.364 | 70.00th=[ 1318], 80.00th=[ 1804], 90.00th=[ 2089], 95.00th=[ 2400],
00:18:47.364 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933],
00:18:47.364 | 99.99th=[ 4933]
00:18:47.364 bw ( KiB/s): min=32768, max=243225, per=2.98%, avg=109973.60, stdev=88660.74, samples=10
00:18:47.364 iops : min= 32, max= 237, avg=107.30, stdev=86.43, samples=10
00:18:47.364 lat (msec) : 100=0.70%, 250=2.11%, 500=5.07%, 750=52.11%, 1000=4.23%
00:18:47.364 lat (msec) : 2000=23.24%, >=2000=12.54%
00:18:47.364 cpu : usr=0.01%, sys=1.27%, ctx=951, majf=0, minf=32769
00:18:47.364 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:47.364 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.364 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.364 job4: (groupid=0, jobs=1): err= 0: pid=1468224: Fri Nov 15 11:00:34 2024
00:18:47.364 read: IOPS=25, BW=25.4MiB/s (26.6MB/s)(256MiB/10094msec)
00:18:47.364 slat (usec): min=35, max=2081.3k, avg=39196.25, stdev=201129.97
00:18:47.364 clat (msec): min=58, max=7662, avg=3013.50, stdev=1933.52
00:18:47.364 lat (msec): min=104, max=7669, avg=3052.70, stdev=1952.50
00:18:47.364 clat percentiles (msec):
00:18:47.364 | 1.00th=[ 111], 5.00th=[ 342], 10.00th=[ 693], 20.00th=[ 1053],
00:18:47.364 | 30.00th=[ 1418], 40.00th=[ 2165], 50.00th=[ 3272], 60.00th=[ 3876],
00:18:47.364 | 70.00th=[ 4044], 80.00th=[ 4144], 90.00th=[ 5604], 95.00th=[ 7148],
00:18:47.364 | 99.00th=[ 7550], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684],
00:18:47.364 | 99.99th=[ 7684]
00:18:47.364 bw ( KiB/s): min=16384, max=65536, per=1.01%, avg=37449.14, stdev=20091.09, samples=7
00:18:47.364 iops : min= 16, max= 64, avg=36.57, stdev=19.62, samples=7
00:18:47.364 lat (msec) : 100=0.39%, 250=3.52%, 500=2.73%, 750=5.47%, 1000=7.42%
00:18:47.364 lat (msec) : 2000=17.97%, >=2000=62.50%
00:18:47.364 cpu : usr=0.02%, sys=0.97%, ctx=711, majf=0, minf=32769
00:18:47.364 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.4%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:18:47.364 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.364 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.364 job4: (groupid=0, jobs=1): err= 0: pid=1468225: Fri Nov 15 11:00:34 2024
00:18:47.364 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(190MiB/10110msec)
00:18:47.364 slat (usec): min=366, max=2101.5k, avg=52721.42, stdev=235112.81
00:18:47.364 clat (msec): min=92, max=8556, avg=2748.17, stdev=1732.26
00:18:47.364 lat (msec): min=116, max=8574, avg=2800.89, stdev=1781.61
00:18:47.364 clat percentiles (msec):
00:18:47.364 | 1.00th=[ 117], 5.00th=[ 317], 10.00th=[ 709], 20.00th=[ 1133],
00:18:47.364 | 30.00th=[ 1401], 40.00th=[ 1938], 50.00th=[ 2601], 60.00th=[ 3171],
00:18:47.364 | 70.00th=[ 4178], 80.00th=[ 4245], 90.00th=[ 4665], 95.00th=[ 4866],
00:18:47.364 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557],
00:18:47.364 | 99.99th=[ 8557]
00:18:47.364 bw ( KiB/s): min= 2048, max=51200, per=0.86%, avg=31744.00, stdev=21512.13, samples=4
00:18:47.364 iops : min= 2, max= 50, avg=31.00, stdev=21.01, samples=4
00:18:47.364 lat (msec) : 100=0.53%, 250=3.68%, 500=3.16%, 750=2.63%, 1000=6.32%
00:18:47.364 lat (msec) : 2000=24.21%, >=2000=59.47%
00:18:47.364 cpu : usr=0.02%, sys=0.80%, ctx=718, majf=0, minf=32769
00:18:47.364 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=66.8%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6%
00:18:47.364 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.364 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.364 job4: (groupid=0, jobs=1): err= 0: pid=1468226: Fri Nov 15 11:00:34 2024
00:18:47.364 read: IOPS=44, BW=44.1MiB/s (46.3MB/s)(446MiB/10102msec)
00:18:47.364 slat (usec): min=33, max=2090.5k, avg=22516.17, stdev=165121.35
00:18:47.364 clat (msec): min=58, max=7084, avg=1176.29, stdev=902.19
00:18:47.364 lat (msec): min=106, max=7123, avg=1198.81, stdev=943.73
00:18:47.364 clat percentiles (msec):
00:18:47.364 | 1.00th=[ 124], 5.00th=[ 351], 10.00th=[ 634], 20.00th=[ 676],
00:18:47.364 | 30.00th=[ 726], 40.00th=[ 844], 50.00th=[ 995], 60.00th=[ 1070],
00:18:47.364 | 70.00th=[ 1301], 80.00th=[ 1519], 90.00th=[ 1653], 95.00th=[ 2668],
00:18:47.364 | 99.00th=[ 7013], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080],
00:18:47.364 | 99.99th=[ 7080]
00:18:47.364 bw ( KiB/s): min=57344, max=176128, per=2.94%, avg=108544.00, stdev=41327.00, samples=6
00:18:47.364 iops : min= 56, max= 172, avg=106.00, stdev=40.36, samples=6
00:18:47.364 lat (msec) : 100=0.22%, 250=2.47%, 500=4.93%, 750=24.89%, 1000=19.06%
00:18:47.364 lat (msec) : 2000=40.81%, >=2000=7.62%
00:18:47.364 cpu : usr=0.02%, sys=0.86%, ctx=763, majf=0, minf=32769
00:18:47.364 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9%
00:18:47.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.364 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:18:47.364 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.365 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.365 job5: (groupid=0, jobs=1): err= 0: pid=1468227: Fri Nov 15 11:00:34 2024
00:18:47.365 read: IOPS=52, BW=52.3MiB/s (54.9MB/s)(568MiB/10857msec)
00:18:47.365 slat (usec): min=33, max=2052.2k, avg=19042.57, stdev=103556.11
00:18:47.365 clat (msec): min=37, max=6021, avg=2300.77, stdev=1486.81
00:18:47.365 lat (msec): min=1086, max=6059, avg=2319.81, stdev=1488.28
00:18:47.365 clat percentiles (msec):
00:18:47.365 | 1.00th=[ 1133], 5.00th=[ 1267], 10.00th=[ 1301], 20.00th=[ 1334],
00:18:47.365 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[ 1485], 60.00th=[ 1586],
00:18:47.365 | 70.00th=[ 2534], 80.00th=[ 3339], 90.00th=[ 5671], 95.00th=[ 5873],
00:18:47.365 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007],
00:18:47.365 | 99.99th=[ 6007]
00:18:47.365 bw ( KiB/s): min=12288, max=124928, per=1.74%, avg=64342.00, stdev=37350.55, samples=14
00:18:47.365 iops : min= 12, max= 122, avg=62.71, stdev=36.42, samples=14
00:18:47.365 lat (msec) : 50=0.18%, 2000=66.55%, >=2000=33.27%
00:18:47.365 cpu : usr=0.06%, sys=1.34%, ctx=1243, majf=0, minf=32769
00:18:47.365 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9%
00:18:47.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.365 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:18:47.365 issued rwts: total=568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.365 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.365 job5: (groupid=0, jobs=1): err= 0: pid=1468228: Fri Nov 15 11:00:34 2024
00:18:47.365 read: IOPS=30, BW=30.6MiB/s (32.1MB/s)(328MiB/10709msec)
00:18:47.365 slat (usec): min=35, max=2138.8k, avg=32627.21, stdev=188979.66
00:18:47.365 clat (msec): min=4, max=6104, avg=3058.31, stdev=1332.62
00:18:47.365 lat (msec): min=1423, max=6106, avg=3090.94, stdev=1327.40
00:18:47.365 clat percentiles (msec):
00:18:47.365 | 1.00th=[ 1418], 5.00th=[ 1469], 10.00th=[ 1485], 20.00th=[ 1586],
00:18:47.365 | 30.00th=[ 1703], 40.00th=[ 2769], 50.00th=[ 3306], 60.00th=[ 3473],
00:18:47.365 | 70.00th=[ 3574], 80.00th=[ 3775], 90.00th=[ 5873], 95.00th=[ 6007],
00:18:47.365 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074],
00:18:47.365 | 99.99th=[ 6074]
00:18:47.365 bw ( KiB/s): min= 4096, max=92160, per=1.39%, avg=51187.62, stdev=26410.45, samples=8
00:18:47.365 iops : min= 4, max= 90, avg=49.87, stdev=25.80, samples=8
00:18:47.365 lat (msec) : 10=0.30%, 2000=30.18%, >=2000=69.51%
00:18:47.365 cpu : usr=0.01%, sys=0.82%, ctx=943, majf=0, minf=32769
00:18:47.365 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.8%
00:18:47.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.365 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:18:47.365 issued rwts: total=328,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.365 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.365 job5: (groupid=0, jobs=1): err= 0: pid=1468229: Fri Nov 15 11:00:34 2024
00:18:47.365 read: IOPS=85, BW=85.4MiB/s (89.6MB/s)(860MiB/10065msec)
00:18:47.365 slat (usec): min=35, max=2090.6k, avg=11623.85, stdev=101228.19
00:18:47.365 clat (msec): min=63, max=4878, avg=1010.26, stdev=943.00
00:18:47.365 lat (msec): min=66, max=4879, avg=1021.89, stdev=953.05
00:18:47.365 clat percentiles (msec):
00:18:47.365 | 1.00th=[ 73], 5.00th=[ 222], 10.00th=[ 451], 20.00th=[ 514],
00:18:47.365 | 30.00th=[ 550], 40.00th=[ 592], 50.00th=[ 634], 60.00th=[ 676],
00:18:47.365 | 70.00th=[ 877], 80.00th=[ 1519], 90.00th=[ 2140], 95.00th=[ 2702],
00:18:47.365 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866],
00:18:47.365 | 99.99th=[ 4866]
00:18:47.365 bw ( KiB/s): min=24576, max=253952, per=3.69%, avg=136425.55, stdev=83865.32, samples=11
00:18:47.365 iops : min= 24, max= 248, avg=133.18, stdev=81.83, samples=11
00:18:47.365 lat (msec) : 100=1.51%, 250=3.60%, 500=5.47%, 750=55.35%, 1000=7.33%
00:18:47.365 lat (msec) : 2000=13.95%, >=2000=12.79%
00:18:47.365 cpu : usr=0.07%, sys=1.56%, ctx=923, majf=0, minf=32769
00:18:47.365 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7%
00:18:47.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.365 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.365 issued rwts: total=860,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.365 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.365 job5: (groupid=0, jobs=1): err= 0: pid=1468230: Fri Nov 15 11:00:34 2024
00:18:47.365 read: IOPS=16, BW=16.1MiB/s (16.9MB/s)(173MiB/10726msec)
00:18:47.365 slat (usec): min=463, max=2136.8k, avg=61783.09, stdev=284544.11
00:18:47.365 clat (msec): min=36, max=10277, avg=5881.56, stdev=2621.73
00:18:47.365 lat (msec): min=2172, max=10313, avg=5943.34, stdev=2606.50
00:18:47.365 clat percentiles (msec):
00:18:47.365 | 1.00th=[ 2165], 5.00th=[ 2366], 10.00th=[ 2601], 20.00th=[ 3071],
00:18:47.365 | 30.00th=[ 3608], 40.00th=[ 4010], 50.00th=[ 7282], 60.00th=[ 7617],
00:18:47.365 | 70.00th=[ 7953], 80.00th=[ 8221], 90.00th=[ 8423], 95.00th=[10268],
00:18:47.365 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:18:47.365 | 99.99th=[10268]
00:18:47.365 bw ( KiB/s): min= 4096, max=38912, per=0.62%, avg=23040.00, stdev=16585.43, samples=4
00:18:47.365 iops : min= 4, max= 38, avg=22.50, stdev=16.20, samples=4
00:18:47.365 lat (msec) : 50=0.58%, >=2000=99.42%
00:18:47.365 cpu : usr=0.00%, sys=0.83%, ctx=534, majf=0, minf=32769
00:18:47.365 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.5%, >=64=63.6%
00:18:47.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.365 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1%
00:18:47.365 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.365 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.365 job5: (groupid=0, jobs=1): err= 0: pid=1468231: Fri Nov 15 11:00:34 2024
00:18:47.365 read: IOPS=105, BW=106MiB/s (111MB/s)(1136MiB/10742msec)
00:18:47.365 slat (usec): min=32, max=2034.2k, avg=9449.87, stdev=83189.35
00:18:47.365 clat (usec): min=1580, max=3279.7k, avg=973401.18, stdev=776324.96
00:18:47.365 lat (msec): min=504, max=3281, avg=982.85, stdev=779.86
00:18:47.365 clat percentiles (msec):
00:18:47.365 | 1.00th=[ 506], 5.00th=[ 510], 10.00th=[ 510], 20.00th=[ 523],
00:18:47.365 | 30.00th=[ 575], 40.00th=[ 634], 50.00th=[ 676], 60.00th=[ 751],
00:18:47.365 | 70.00th=[ 810], 80.00th=[ 911], 90.00th=[ 2802], 95.00th=[ 3037],
00:18:47.365 | 99.00th=[ 3205], 99.50th=[ 3272], 99.90th=[ 3272], 99.95th=[ 3272],
00:18:47.365 | 99.99th=[ 3272]
00:18:47.365 bw ( KiB/s): min=10240, max=253952, per=4.30%, avg=158798.77, stdev=82382.12, samples=13
00:18:47.365 iops : min= 10, max= 248, avg=155.08, stdev=80.45, samples=13
00:18:47.365 lat (msec) : 2=0.09%, 750=58.45%, 1000=25.53%, 2000=2.82%, >=2000=13.12%
00:18:47.365 cpu : usr=0.05%, sys=1.41%, ctx=1277, majf=0, minf=32769
00:18:47.365 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5%
00:18:47.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.365 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:47.365 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.365 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:47.365 job5: (groupid=0, jobs=1): err= 0: pid=1468232: Fri Nov 15 11:00:34 2024
00:18:47.365 read: IOPS=17, BW=17.2MiB/s (18.0MB/s)(186MiB/10826msec)
00:18:47.365 slat (usec): min=386, max=2093.3k, avg=57992.19, stdev=285048.29
00:18:47.365 clat (msec): min=37, max=7831, avg=4132.84, stdev=1782.60
00:18:47.365 lat (msec): min=2124, max=7833, avg=4190.84, stdev=1786.38
00:18:47.365 clat percentiles (msec):
00:18:47.365 | 1.00th=[ 2123], 5.00th=[ 2265], 10.00th=[ 2400], 20.00th=[ 2769],
00:18:47.365 | 30.00th=[ 3037], 40.00th=[ 3373], 50.00th=[ 3675], 60.00th=[ 3876],
00:18:47.365 | 70.00th=[ 3943], 80.00th=[ 5940], 90.00th=[ 7752], 95.00th=[ 7752],
00:18:47.365 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819],
00:18:47.365 | 99.99th=[ 7819]
00:18:47.365 bw ( KiB/s): min=26624, max=55296, per=1.07%, avg=39594.67, stdev=14529.74, samples=3
00:18:47.365 iops : min= 26, max= 54, avg=38.67, stdev=14.19, samples=3
00:18:47.365 lat (msec) : 50=0.54%, >=2000=99.46%
00:18:47.365 cpu : usr=0.00%, sys=0.70%, ctx=558, majf=0, minf=32769
00:18:47.365 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1%
00:18:47.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:47.365 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7%
00:18:47.365 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:47.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.365 job5: (groupid=0, jobs=1): err= 0: pid=1468233: Fri Nov 15 11:00:34 2024 00:18:47.365 read: IOPS=53, BW=53.4MiB/s (56.0MB/s)(574MiB/10748msec) 00:18:47.365 slat (usec): min=32, max=2139.4k, avg=18690.13, stdev=151012.65 00:18:47.365 clat (msec): min=16, max=5008, avg=1401.37, stdev=1252.03 00:18:47.365 lat (msec): min=411, max=5140, avg=1420.06, stdev=1264.38 00:18:47.365 clat percentiles (msec): 00:18:47.365 | 1.00th=[ 414], 5.00th=[ 418], 10.00th=[ 451], 20.00th=[ 502], 00:18:47.365 | 30.00th=[ 535], 40.00th=[ 542], 50.00th=[ 575], 60.00th=[ 877], 00:18:47.365 | 70.00th=[ 1804], 80.00th=[ 2668], 90.00th=[ 3507], 95.00th=[ 4279], 00:18:47.366 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 5000], 99.95th=[ 5000], 00:18:47.366 | 99.99th=[ 5000] 00:18:47.366 bw ( KiB/s): min=14336, max=286720, per=3.09%, avg=114176.00, stdev=107469.14, samples=8 00:18:47.366 iops : min= 14, max= 280, avg=111.50, stdev=104.95, samples=8 00:18:47.366 lat (msec) : 20=0.17%, 500=19.86%, 750=36.76%, 1000=4.53%, 2000=11.32% 00:18:47.366 lat (msec) : >=2000=27.35% 00:18:47.366 cpu : usr=0.01%, sys=1.13%, ctx=844, majf=0, minf=32769 00:18:47.366 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:18:47.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.366 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:47.366 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.366 job5: (groupid=0, jobs=1): err= 0: pid=1468234: Fri Nov 15 11:00:34 2024 00:18:47.366 read: IOPS=47, BW=47.5MiB/s (49.8MB/s)(512MiB/10787msec) 00:18:47.366 slat (usec): min=35, max=2027.5k, avg=20989.02, stdev=153368.83 00:18:47.366 clat (msec): min=37, max=5987, avg=2539.29, stdev=1866.54 00:18:47.366 lat (msec): min=762, max=5994, avg=2560.28, stdev=1870.32 00:18:47.366 clat percentiles (msec): 00:18:47.366 | 1.00th=[ 760], 5.00th=[ 768], 10.00th=[ 768], 20.00th=[ 768], 00:18:47.366 | 30.00th=[ 793], 40.00th=[ 802], 50.00th=[ 2333], 60.00th=[ 2769], 00:18:47.366 | 70.00th=[ 3809], 80.00th=[ 4866], 90.00th=[ 5537], 95.00th=[ 5873], 00:18:47.366 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007], 00:18:47.366 | 99.99th=[ 6007] 00:18:47.366 bw ( KiB/s): min= 4087, max=180224, per=1.93%, avg=71484.36, stdev=59038.92, samples=11 00:18:47.366 iops : min= 3, max= 176, avg=69.64, stdev=57.81, samples=11 00:18:47.366 lat (msec) : 50=0.20%, 1000=45.51%, >=2000=54.30% 00:18:47.366 cpu : usr=0.02%, sys=1.12%, ctx=726, majf=0, minf=32769 00:18:47.366 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:18:47.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.366 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:18:47.366 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.366 job5: (groupid=0, jobs=1): err= 0: pid=1468235: Fri Nov 15 11:00:34 2024 00:18:47.366 read: IOPS=11, BW=11.6MiB/s (12.1MB/s)(124MiB/10705msec) 00:18:47.366 slat (msec): min=2, max=2090, avg=86.28, stdev=350.80 00:18:47.366 clat (msec): min=5, max=10702, avg=4192.10, stdev=2467.53 00:18:47.366 lat (msec): min=2091, max=10704, avg=4278.38, stdev=2506.68 00:18:47.366 clat percentiles (msec): 00:18:47.366 | 1.00th=[ 2089], 5.00th=[ 2165], 10.00th=[ 
2265], 20.00th=[ 2467], 00:18:47.366 | 30.00th=[ 2668], 40.00th=[ 3004], 50.00th=[ 3306], 60.00th=[ 3641], 00:18:47.366 | 70.00th=[ 3977], 80.00th=[ 6275], 90.00th=[ 8423], 95.00th=[10671], 00:18:47.366 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:18:47.366 | 99.99th=[10671] 00:18:47.366 lat (msec) : 10=0.81%, >=2000=99.19% 00:18:47.366 cpu : usr=0.00%, sys=0.61%, ctx=450, majf=0, minf=31745 00:18:47.366 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.5%, 16=12.9%, 32=25.8%, >=64=49.2% 00:18:47.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.366 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:18:47.366 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.366 job5: (groupid=0, jobs=1): err= 0: pid=1468236: Fri Nov 15 11:00:34 2024 00:18:47.366 read: IOPS=14, BW=14.8MiB/s (15.5MB/s)(159MiB/10734msec) 00:18:47.366 slat (usec): min=424, max=2160.1k, avg=67470.65, stdev=316816.26 00:18:47.366 clat (msec): min=5, max=8446, avg=3782.51, stdev=1873.03 00:18:47.366 lat (msec): min=1901, max=8494, avg=3849.98, stdev=1884.06 00:18:47.366 clat percentiles (msec): 00:18:47.366 | 1.00th=[ 1905], 5.00th=[ 2022], 10.00th=[ 2198], 20.00th=[ 2400], 00:18:47.366 | 30.00th=[ 2735], 40.00th=[ 3071], 50.00th=[ 3272], 60.00th=[ 3507], 00:18:47.366 | 70.00th=[ 3876], 80.00th=[ 4111], 90.00th=[ 8221], 95.00th=[ 8356], 00:18:47.366 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:18:47.366 | 99.99th=[ 8423] 00:18:47.366 bw ( KiB/s): min=30720, max=32768, per=0.86%, avg=31744.00, stdev=1448.15, samples=2 00:18:47.366 iops : min= 30, max= 32, avg=31.00, stdev= 1.41, samples=2 00:18:47.366 lat (msec) : 10=0.63%, 2000=3.14%, >=2000=96.23% 00:18:47.366 cpu : usr=0.01%, sys=0.64%, ctx=480, majf=0, minf=32769 00:18:47.366 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.0%, 16=10.1%, 32=20.1%, >=64=60.4% 00:18:47.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.366 complete : 0=0.0%, 4=97.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.0% 00:18:47.366 issued rwts: total=159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.366 job5: (groupid=0, jobs=1): err= 0: pid=1468237: Fri Nov 15 11:00:34 2024 00:18:47.366 read: IOPS=48, BW=48.6MiB/s (50.9MB/s)(528MiB/10871msec) 00:18:47.366 slat (usec): min=37, max=2076.7k, avg=20510.72, stdev=126243.81 00:18:47.366 clat (msec): min=37, max=6502, avg=2494.92, stdev=1350.85 00:18:47.366 lat (msec): min=1148, max=6521, avg=2515.43, stdev=1354.63 00:18:47.366 clat percentiles (msec): 00:18:47.366 | 1.00th=[ 1167], 5.00th=[ 1217], 10.00th=[ 1284], 20.00th=[ 1334], 00:18:47.366 | 30.00th=[ 1401], 40.00th=[ 1787], 50.00th=[ 1821], 60.00th=[ 2467], 00:18:47.366 | 70.00th=[ 3339], 80.00th=[ 3675], 90.00th=[ 4111], 95.00th=[ 4396], 00:18:47.366 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:18:47.366 | 99.99th=[ 6477] 00:18:47.366 bw ( KiB/s): min=18432, max=118784, per=1.70%, avg=62999.62, stdev=30517.19, samples=13 00:18:47.366 iops : min= 18, max= 116, avg=61.46, stdev=29.72, samples=13 00:18:47.366 lat (msec) : 50=0.19%, 2000=56.63%, >=2000=43.18% 00:18:47.366 cpu : usr=0.04%, sys=1.40%, ctx=1203, majf=0, minf=32769 00:18:47.366 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.1% 00:18:47.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:47.366 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:47.366 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.366 job5: (groupid=0, jobs=1): err= 0: pid=1468238: Fri Nov 15 11:00:34 2024 00:18:47.366 read: IOPS=157, BW=158MiB/s (166MB/s)(1600MiB/10129msec) 00:18:47.366 slat (usec): min=40, max=2038.0k, avg=6260.83, stdev=52094.21 00:18:47.366 clat (msec): min=101, max=2810, avg=774.48, stdev=582.76 00:18:47.366 lat (msec): min=176, max=2812, avg=780.74, stdev=584.79 00:18:47.366 clat percentiles (msec): 00:18:47.366 | 1.00th=[ 203], 5.00th=[ 401], 10.00th=[ 405], 20.00th=[ 414], 00:18:47.366 | 30.00th=[ 481], 40.00th=[ 659], 50.00th=[ 676], 60.00th=[ 693], 00:18:47.366 | 70.00th=[ 726], 80.00th=[ 785], 90.00th=[ 877], 95.00th=[ 2668], 00:18:47.366 | 99.00th=[ 2769], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802], 00:18:47.366 | 99.99th=[ 2802] 00:18:47.366 bw ( KiB/s): min=10240, max=323584, per=5.10%, avg=188376.12, stdev=79502.18, samples=16 00:18:47.366 iops : min= 10, max= 316, avg=183.94, stdev=77.60, samples=16 00:18:47.366 lat (msec) : 250=1.00%, 500=31.00%, 750=43.81%, 1000=16.25%, >=2000=7.94% 00:18:47.366 cpu : usr=0.10%, sys=2.59%, ctx=1277, majf=0, minf=32769 00:18:47.366 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:18:47.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.366 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.366 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.366 job5: (groupid=0, jobs=1): err= 0: pid=1468239: Fri Nov 15 11:00:34 2024 00:18:47.366 read: IOPS=34, BW=34.9MiB/s (36.6MB/s)(354MiB/10130msec) 00:18:47.366 slat (usec): min=29, max=2097.5k, avg=28404.66, stdev=178232.34 00:18:47.366 clat (msec): min=72, max=8582, avg=2055.53, stdev=2536.93 00:18:47.366 lat (msec): min=144, max=8595, avg=2083.93, stdev=2559.65 00:18:47.366 clat percentiles (msec): 00:18:47.366 | 1.00th=[ 146], 5.00th=[ 243], 10.00th=[ 338], 20.00th=[ 527], 00:18:47.366 | 30.00th=[ 726], 40.00th=[ 793], 50.00th=[ 852], 60.00th=[ 936], 00:18:47.366 | 70.00th=[ 1687], 80.00th=[ 2366], 90.00th=[ 6477], 95.00th=[ 8490], 00:18:47.366 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:18:47.366 | 99.99th=[ 8557] 00:18:47.366 bw ( KiB/s): min=24576, max=169984, per=3.13%, avg=115712.00, stdev=66981.39, samples=4 00:18:47.366 iops : min= 24, max= 166, avg=113.00, stdev=65.41, samples=4 00:18:47.366 lat (msec) : 100=0.28%, 250=7.34%, 500=10.17%, 750=17.51%, 1000=25.42% 00:18:47.366 lat (msec) : 2000=12.71%, >=2000=26.55% 00:18:47.366 cpu : usr=0.02%, sys=1.04%, ctx=602, majf=0, minf=32769 00:18:47.366 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.2% 00:18:47.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.366 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:18:47.366 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.366 00:18:47.366 Run status group 0 (all jobs): 00:18:47.366 READ: bw=3609MiB/s (3784MB/s), 3529KiB/s-188MiB/s (3614kB/s-197MB/s), io=38.3GiB (41.1GB), run=10015-10872msec 00:18:47.366 00:18:47.366 Disk stats (read/write): 00:18:47.366 nvme0n1: ios=34276/0, 
merge=0/0, ticks=6217918/0, in_queue=6217918, util=98.55% 00:18:47.366 nvme1n1: ios=31856/0, merge=0/0, ticks=5268455/0, in_queue=5268455, util=98.61% 00:18:47.366 nvme2n1: ios=59603/0, merge=0/0, ticks=7543712/0, in_queue=7543712, util=98.58% 00:18:47.366 nvme3n1: ios=68301/0, merge=0/0, ticks=7487044/0, in_queue=7487044, util=98.89% 00:18:47.366 nvme4n1: ios=59838/0, merge=0/0, ticks=6576551/0, in_queue=6576551, util=99.01% 00:18:47.366 nvme5n1: ios=56640/0, merge=0/0, ticks=6273265/0, in_queue=6273265, util=99.19% 00:18:47.366 11:00:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:18:47.366 11:00:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:18:47.366 11:00:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:47.366 11:00:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:18:49.269 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000000 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000000 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:49.269 11:00:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:51.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.170 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:18:51.170 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:18:51.170 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000001 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000001 
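[editor's note] The waitforserial_disconnect trace above reduces to a small polling helper: it greps lsblk's NAME,SERIAL output (both the tree and the flat -l listing) until the SPDK serial disappears, then returns 0. A readable reconstruction assembled from the commands visible in the trace; the retry limit and sleep interval are assumptions, since the trace only shows the successful pass.

    # Reconstructed from the trace above; timeout and sleep values are assumed.
    waitforserial_disconnect() {
        local serial=$1
        local i=0
        # Wait until no block device still reports the given SPDK serial.
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1   # assumed retry limit
            sleep 1
        done
        # Double-check with the flat (-l) listing before declaring success.
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 1
        done
        return 0
    }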
00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:51.428 11:00:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:53.973 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000002 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000002 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:53.973 11:00:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:55.874 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:55.874 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:18:55.874 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:18:55.874 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:55.874 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000003 00:18:55.874 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o 
NAME,SERIAL 00:18:55.874 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000003 00:18:56.132 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:18:56.132 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:56.132 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.132 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:56.132 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.132 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:56.132 11:00:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:58.666 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000004 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000004 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:18:58.666 11:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:00.568 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000005 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # 
lsblk -l -o NAME,SERIAL 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000005 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:00.568 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:00.569 rmmod nvme_rdma 00:19:00.569 rmmod nvme_fabrics 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 1464250 ']' 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 1464250 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # '[' -z 1464250 ']' 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # kill -0 1464250 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # uname 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:00.569 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1464250 00:19:00.827 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:00.827 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:00.827 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1464250' 00:19:00.827 killing process with pid 1464250 00:19:00.827 11:00:49 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@971 -- # kill 1464250 00:19:00.827 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@976 -- # wait 1464250 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:01.086 00:19:01.086 real 0m51.171s 00:19:01.086 user 3m8.788s 00:19:01.086 sys 0m15.789s 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.086 ************************************ 00:19:01.086 END TEST nvmf_srq_overwhelm 00:19:01.086 ************************************ 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:01.086 ************************************ 00:19:01.086 START TEST nvmf_shutdown 00:19:01.086 ************************************ 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:01.086 * Looking for test storage... 
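[editor's note] The "killing process with pid" lines above come from the killprocess helper. A sketch reconstructed from the checks visible in the trace: a '[' -z ']' argument guard, a kill -0 liveness probe, ps --no-headers -o comm= to fetch the process name on Linux, a guard against killing a sudo wrapper (the '[' reactor_0 = sudo ']' comparison), then kill and wait. The exact error-handling paths are assumptions.

    # Reconstructed from the traced checks; error handling details are assumed.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # argument guard seen in the trace
        kill -0 "$pid" || return 1             # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Never signal a sudo wrapper directly.
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                    # reap it; ignore its exit status
    }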
00:19:01.086 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:19:01.086 11:00:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.346 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:01.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.347 --rc genhtml_branch_coverage=1 00:19:01.347 --rc genhtml_function_coverage=1 00:19:01.347 --rc genhtml_legend=1 00:19:01.347 --rc geninfo_all_blocks=1 00:19:01.347 --rc geninfo_unexecuted_blocks=1 00:19:01.347 00:19:01.347 ' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:01.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.347 --rc genhtml_branch_coverage=1 00:19:01.347 --rc genhtml_function_coverage=1 00:19:01.347 --rc genhtml_legend=1 00:19:01.347 --rc geninfo_all_blocks=1 00:19:01.347 --rc geninfo_unexecuted_blocks=1 00:19:01.347 00:19:01.347 ' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:01.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.347 --rc genhtml_branch_coverage=1 00:19:01.347 --rc genhtml_function_coverage=1 00:19:01.347 --rc genhtml_legend=1 00:19:01.347 --rc geninfo_all_blocks=1 00:19:01.347 --rc geninfo_unexecuted_blocks=1 00:19:01.347 00:19:01.347 ' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:01.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.347 --rc genhtml_branch_coverage=1 00:19:01.347 --rc genhtml_function_coverage=1 00:19:01.347 --rc genhtml_legend=1 00:19:01.347 --rc geninfo_all_blocks=1 00:19:01.347 --rc geninfo_unexecuted_blocks=1 00:19:01.347 00:19:01.347 ' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.347 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:01.347 11:00:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:01.347 ************************************ 00:19:01.347 START TEST nvmf_shutdown_tc1 00:19:01.347 ************************************ 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:01.347 11:00:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.619 11:00:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.619 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:19:06.620 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:19:06.620 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:19:06.620 Found net devices under 0000:af:00.0: mlx_0_0 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.620 11:00:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:19:06.620 Found net devices under 0000:af:00.1: mlx_0_1 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:06.620 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:06.620 11:00:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:06.880 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:06.880 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:19:06.880 altname enp175s0f0np0 00:19:06.880 altname ens801f0np0 00:19:06.880 inet 192.168.100.8/24 scope global mlx_0_0 00:19:06.880 valid_lft forever preferred_lft forever 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:06.880 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:06.880 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:06.880 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:19:06.880 altname enp175s0f1np1 00:19:06.880 altname ens801f1np1 00:19:06.880 inet 192.168.100.9/24 scope global mlx_0_1 00:19:06.881 valid_lft forever preferred_lft forever 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:06.881 11:00:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:06.881 192.168.100.9' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:06.881 192.168.100.9' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:06.881 192.168.100.9' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:19:06.881 11:00:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1475338 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1475338 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1475338 ']' 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.881 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:06.881 [2024-11-15 11:00:55.684395] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:19:06.881 [2024-11-15 11:00:55.684439] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.881 [2024-11-15 11:00:55.746773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.142 [2024-11-15 11:00:55.790266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:07.142 [2024-11-15 11:00:55.790297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.142 [2024-11-15 11:00:55.790305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.142 [2024-11-15 11:00:55.790311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.142 [2024-11-15 11:00:55.790316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.142 [2024-11-15 11:00:55.791897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.142 [2024-11-15 11:00:55.791982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.142 [2024-11-15 11:00:55.792102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.142 [2024-11-15 11:00:55.792103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.142 11:00:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:07.142 [2024-11-15 11:00:55.953799] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2353530/0x2357a20) succeed. 00:19:07.142 [2024-11-15 11:00:55.963364] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2354bc0/0x23990c0) succeed. 
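The interface bring-up traced above reduces to two small idioms: load the IB/RDMA kernel modules (nvmf/common.sh@66-72), then resolve each RDMA interface's IPv4 address by parsing one-record-per-line `ip` output (nvmf/common.sh@116-117). A minimal sketch of the address-resolution step, assuming the interface name mlx_0_0 from this run; the empty-result guard at the end is an illustrative addition, not part of the harness:

# Sketch of the get_ip_address idiom traced in nvmf/common.sh above.
# `ip -o` prints one record per line; field 4 holds "ADDR/PREFIXLEN".
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

ip_addr=$(get_ip_address mlx_0_0)    # resolves to 192.168.100.8 on this host
[[ -z $ip_addr ]] && echo "no IPv4 address on mlx_0_0" >&2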
00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.401 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.402 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:07.402 Malloc1 00:19:07.402 [2024-11-15 11:00:56.191792] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:07.402 Malloc2 00:19:07.402 Malloc3 00:19:07.661 Malloc4 00:19:07.661 Malloc5 00:19:07.661 Malloc6 00:19:07.661 Malloc7 00:19:07.661 Malloc8 00:19:07.661 Malloc9 00:19:07.921 Malloc10 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1475614 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1475614 /var/tmp/bdevperf.sock 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1475614 ']' 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
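The launch pattern at shutdown.sh@78-80 above is worth spelling out: a second SPDK application (bdev_svc) is started on its own private RPC socket, its pid is held in perfpid, and the harness blocks on that socket before issuing RPCs (the framework_wait_init call at shutdown.sh@81 further below). A condensed sketch of that pattern, assuming an SPDK checkout as the working directory; config.json stands in for the process substitution seen above, the stock scripts/rpc.py stands in for the harness's rpc_cmd wrapper, and the polling loop is a simplification of waitforlisten:

# Start the app on a private RPC socket and hold its pid.
sock=/var/tmp/bdevperf.sock
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r "$sock" --json config.json &
perfpid=$!

# Wait for the socket to appear, then block until framework init finishes.
for _ in $(seq 1 100); do
    [[ -S $sock ]] && break
    sleep 0.1
done
./scripts/rpc.py -s "$sock" framework_wait_init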
00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.921 { 00:19:07.921 "params": { 00:19:07.921 "name": "Nvme$subsystem", 00:19:07.921 "trtype": "$TEST_TRANSPORT", 00:19:07.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.921 "adrfam": "ipv4", 00:19:07.921 "trsvcid": "$NVMF_PORT", 00:19:07.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.921 "hdgst": ${hdgst:-false}, 00:19:07.921 "ddgst": ${ddgst:-false} 00:19:07.921 }, 00:19:07.921 "method": "bdev_nvme_attach_controller" 00:19:07.921 } 00:19:07.921 EOF 00:19:07.921 )") 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.921 { 00:19:07.921 "params": { 00:19:07.921 "name": "Nvme$subsystem", 00:19:07.921 "trtype": "$TEST_TRANSPORT", 00:19:07.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.921 "adrfam": "ipv4", 00:19:07.921 "trsvcid": "$NVMF_PORT", 00:19:07.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.921 "hdgst": ${hdgst:-false}, 00:19:07.921 "ddgst": ${ddgst:-false} 00:19:07.921 }, 00:19:07.921 "method": "bdev_nvme_attach_controller" 00:19:07.921 } 00:19:07.921 EOF 00:19:07.921 )") 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.921 { 00:19:07.921 "params": { 00:19:07.921 "name": "Nvme$subsystem", 00:19:07.921 "trtype": "$TEST_TRANSPORT", 00:19:07.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.921 "adrfam": "ipv4", 00:19:07.921 "trsvcid": "$NVMF_PORT", 00:19:07.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.921 "hdgst": ${hdgst:-false}, 00:19:07.921 "ddgst": ${ddgst:-false} 00:19:07.921 }, 00:19:07.921 "method": "bdev_nvme_attach_controller" 00:19:07.921 } 00:19:07.921 EOF 00:19:07.921 )") 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.921 { 00:19:07.921 "params": { 00:19:07.921 "name": "Nvme$subsystem", 00:19:07.921 "trtype": "$TEST_TRANSPORT", 00:19:07.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.921 "adrfam": "ipv4", 00:19:07.921 "trsvcid": "$NVMF_PORT", 00:19:07.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.921 "hdgst": ${hdgst:-false}, 00:19:07.921 "ddgst": ${ddgst:-false} 00:19:07.921 }, 00:19:07.921 "method": "bdev_nvme_attach_controller" 00:19:07.921 } 00:19:07.921 EOF 00:19:07.921 )") 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.921 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.922 { 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme$subsystem", 00:19:07.922 "trtype": "$TEST_TRANSPORT", 00:19:07.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "$NVMF_PORT", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.922 "hdgst": ${hdgst:-false}, 00:19:07.922 "ddgst": ${ddgst:-false} 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 } 00:19:07.922 EOF 00:19:07.922 )") 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.922 { 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme$subsystem", 00:19:07.922 "trtype": "$TEST_TRANSPORT", 00:19:07.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "$NVMF_PORT", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.922 "hdgst": ${hdgst:-false}, 00:19:07.922 "ddgst": ${ddgst:-false} 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 } 00:19:07.922 EOF 00:19:07.922 )") 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.922 { 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme$subsystem", 00:19:07.922 "trtype": "$TEST_TRANSPORT", 00:19:07.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "$NVMF_PORT", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.922 "hdgst": ${hdgst:-false}, 00:19:07.922 "ddgst": ${ddgst:-false} 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 } 00:19:07.922 EOF 00:19:07.922 )") 00:19:07.922 [2024-11-15 11:00:56.672591] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:19:07.922 [2024-11-15 11:00:56.672640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.922 { 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme$subsystem", 00:19:07.922 "trtype": "$TEST_TRANSPORT", 00:19:07.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "$NVMF_PORT", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.922 "hdgst": ${hdgst:-false}, 00:19:07.922 "ddgst": ${ddgst:-false} 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 } 00:19:07.922 EOF 00:19:07.922 )") 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.922 { 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme$subsystem", 00:19:07.922 "trtype": "$TEST_TRANSPORT", 00:19:07.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "$NVMF_PORT", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.922 "hdgst": ${hdgst:-false}, 00:19:07.922 "ddgst": ${ddgst:-false} 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 } 00:19:07.922 EOF 00:19:07.922 )") 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:07.922 { 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme$subsystem", 00:19:07.922 "trtype": "$TEST_TRANSPORT", 00:19:07.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "$NVMF_PORT", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.922 "hdgst": ${hdgst:-false}, 00:19:07.922 "ddgst": ${ddgst:-false} 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 } 00:19:07.922 EOF 00:19:07.922 )") 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
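The config+=() heredoc fragments above are gen_nvmf_target_json at work: one bdev_nvme_attach_controller stanza is emitted per subsystem, the stanzas are comma-joined via IFS, and jq validates and pretty-prints the result (its output is the printf block that follows). A trimmed sketch under those assumptions; gen_attach_stanzas is an illustrative name, and since the real helper embeds the stanzas in a fuller SPDK configuration document, a bare JSON array is used here to keep the sketch self-contained:

# Trimmed re-creation of the per-subsystem stanza loop traced above.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the harness
# environment (rdma, 192.168.100.8 and 4420 in this run).
gen_attach_stanzas() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<<"[${config[*]}]"    # wrap in an array so jq sees valid JSON
}

Calling gen_attach_stanzas {1..10} would emit the ten stanzas visible in the printf output in this log.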
00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:07.922 11:00:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme1", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 },{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme2", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 },{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme3", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 },{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme4", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 },{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme5", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 },{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme6", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 },{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme7", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.922 },{ 00:19:07.922 "params": { 00:19:07.922 "name": "Nvme8", 00:19:07.922 "trtype": "rdma", 00:19:07.922 "traddr": "192.168.100.8", 00:19:07.922 "adrfam": "ipv4", 00:19:07.922 "trsvcid": "4420", 00:19:07.922 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:07.922 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:07.922 "hdgst": false, 00:19:07.922 "ddgst": false 00:19:07.922 }, 00:19:07.922 "method": "bdev_nvme_attach_controller" 00:19:07.923 },{ 00:19:07.923 "params": { 00:19:07.923 "name": "Nvme9", 00:19:07.923 "trtype": "rdma", 00:19:07.923 "traddr": "192.168.100.8", 00:19:07.923 "adrfam": "ipv4", 00:19:07.923 "trsvcid": "4420", 00:19:07.923 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:07.923 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:07.923 "hdgst": false, 00:19:07.923 "ddgst": false 00:19:07.923 }, 00:19:07.923 "method": "bdev_nvme_attach_controller" 00:19:07.923 },{ 00:19:07.923 "params": { 00:19:07.923 "name": "Nvme10", 00:19:07.923 "trtype": "rdma", 00:19:07.923 "traddr": "192.168.100.8", 00:19:07.923 "adrfam": "ipv4", 00:19:07.923 "trsvcid": "4420", 00:19:07.923 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:07.923 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:07.923 "hdgst": false, 00:19:07.923 "ddgst": false 00:19:07.923 }, 00:19:07.923 "method": "bdev_nvme_attach_controller" 00:19:07.923 }' 00:19:07.923 [2024-11-15 11:00:56.737958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.923 [2024-11-15 11:00:56.779545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1475614 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:19:08.858 11:00:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:19:09.793 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1475614 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1475338 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:09.793 11:00:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:09.793 { 00:19:09.793 "params": { 00:19:09.793 "name": "Nvme$subsystem", 00:19:09.793 "trtype": "$TEST_TRANSPORT", 00:19:09.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.793 "adrfam": "ipv4", 00:19:09.793 "trsvcid": "$NVMF_PORT", 00:19:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.793 "hdgst": ${hdgst:-false}, 00:19:09.793 "ddgst": ${ddgst:-false} 00:19:09.793 }, 00:19:09.793 "method": "bdev_nvme_attach_controller" 00:19:09.793 } 00:19:09.793 EOF 00:19:09.793 )") 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:09.793 { 00:19:09.793 "params": { 00:19:09.793 "name": "Nvme$subsystem", 00:19:09.793 "trtype": "$TEST_TRANSPORT", 00:19:09.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.793 "adrfam": "ipv4", 00:19:09.793 "trsvcid": "$NVMF_PORT", 00:19:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.793 "hdgst": ${hdgst:-false}, 00:19:09.793 "ddgst": ${ddgst:-false} 00:19:09.793 }, 00:19:09.793 "method": "bdev_nvme_attach_controller" 00:19:09.793 } 00:19:09.793 EOF 00:19:09.793 )") 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:09.793 { 00:19:09.793 "params": { 00:19:09.793 "name": "Nvme$subsystem", 00:19:09.793 "trtype": "$TEST_TRANSPORT", 00:19:09.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.793 "adrfam": "ipv4", 00:19:09.793 "trsvcid": "$NVMF_PORT", 00:19:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.793 "hdgst": ${hdgst:-false}, 00:19:09.793 "ddgst": ${ddgst:-false} 00:19:09.793 }, 00:19:09.793 "method": "bdev_nvme_attach_controller" 00:19:09.793 } 00:19:09.793 EOF 00:19:09.793 )") 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:09.793 { 00:19:09.793 "params": { 00:19:09.793 "name": "Nvme$subsystem", 00:19:09.793 "trtype": "$TEST_TRANSPORT", 00:19:09.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.793 "adrfam": "ipv4", 00:19:09.793 "trsvcid": "$NVMF_PORT", 00:19:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.793 "hdgst": ${hdgst:-false}, 00:19:09.793 "ddgst": ${ddgst:-false} 00:19:09.793 }, 00:19:09.793 "method": 
"bdev_nvme_attach_controller" 00:19:09.793 } 00:19:09.793 EOF 00:19:09.793 )") 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:09.793 { 00:19:09.793 "params": { 00:19:09.793 "name": "Nvme$subsystem", 00:19:09.793 "trtype": "$TEST_TRANSPORT", 00:19:09.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.793 "adrfam": "ipv4", 00:19:09.793 "trsvcid": "$NVMF_PORT", 00:19:09.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.793 "hdgst": ${hdgst:-false}, 00:19:09.793 "ddgst": ${ddgst:-false} 00:19:09.793 }, 00:19:09.793 "method": "bdev_nvme_attach_controller" 00:19:09.793 } 00:19:09.793 EOF 00:19:09.793 )") 00:19:09.793 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:10.052 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:10.052 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:10.052 { 00:19:10.052 "params": { 00:19:10.052 "name": "Nvme$subsystem", 00:19:10.052 "trtype": "$TEST_TRANSPORT", 00:19:10.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.052 "adrfam": "ipv4", 00:19:10.052 "trsvcid": "$NVMF_PORT", 00:19:10.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.052 "hdgst": ${hdgst:-false}, 00:19:10.052 "ddgst": ${ddgst:-false} 00:19:10.052 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 } 00:19:10.053 EOF 00:19:10.053 )") 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:10.053 { 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme$subsystem", 00:19:10.053 "trtype": "$TEST_TRANSPORT", 00:19:10.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "$NVMF_PORT", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.053 "hdgst": ${hdgst:-false}, 00:19:10.053 "ddgst": ${ddgst:-false} 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 } 00:19:10.053 EOF 00:19:10.053 )") 00:19:10.053 [2024-11-15 11:00:58.689624] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:19:10.053 [2024-11-15 11:00:58.689673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475881 ] 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:10.053 { 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme$subsystem", 00:19:10.053 "trtype": "$TEST_TRANSPORT", 00:19:10.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "$NVMF_PORT", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.053 "hdgst": ${hdgst:-false}, 00:19:10.053 "ddgst": ${ddgst:-false} 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 } 00:19:10.053 EOF 00:19:10.053 )") 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:10.053 { 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme$subsystem", 00:19:10.053 "trtype": "$TEST_TRANSPORT", 00:19:10.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "$NVMF_PORT", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.053 "hdgst": ${hdgst:-false}, 00:19:10.053 "ddgst": ${ddgst:-false} 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 } 00:19:10.053 EOF 00:19:10.053 )") 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:10.053 { 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme$subsystem", 00:19:10.053 "trtype": "$TEST_TRANSPORT", 00:19:10.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "$NVMF_PORT", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.053 "hdgst": ${hdgst:-false}, 00:19:10.053 "ddgst": ${ddgst:-false} 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 } 00:19:10.053 EOF 00:19:10.053 )") 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
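This second config build feeds bdevperf itself: at shutdown.sh@92 the generated JSON arrives on /dev/fd/62 via process substitution, and the remaining flags define the workload summarized in the results below. A sketch of the equivalent standalone invocation, with paths relative to an SPDK checkout and gen_nvmf_target_json and num_subsystems as traced above:

# Equivalent standalone invocation of the run measured below.
#   -q 64      queue depth
#   -o 65536   I/O size in bytes (64 KiB)
#   -w verify  write, then read back and compare
#   -t 1       run time in seconds
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1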
00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:10.053 11:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme1", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme2", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme3", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme4", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme5", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme6", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme7", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme8", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme9", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 },{ 00:19:10.053 "params": { 00:19:10.053 "name": "Nvme10", 00:19:10.053 "trtype": "rdma", 00:19:10.053 "traddr": "192.168.100.8", 00:19:10.053 "adrfam": "ipv4", 00:19:10.053 "trsvcid": "4420", 00:19:10.053 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:10.053 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:10.053 "hdgst": false, 00:19:10.053 "ddgst": false 00:19:10.053 }, 00:19:10.053 "method": "bdev_nvme_attach_controller" 00:19:10.053 }' 00:19:10.053 [2024-11-15 11:00:58.755917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.054 [2024-11-15 11:00:58.798120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.053 Running I/O for 1 seconds... 00:19:12.249 3062.00 IOPS, 191.38 MiB/s 00:19:12.249 Latency(us) 00:19:12.249 [2024-11-15T10:01:01.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme1n1 : 1.19 336.00 21.00 0.00 0.00 180082.39 7408.42 237069.36 00:19:12.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme2n1 : 1.19 344.90 21.56 0.00 0.00 171510.48 11910.46 170507.58 00:19:12.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme3n1 : 1.20 375.65 23.48 0.00 0.00 162910.88 4843.97 159565.91 00:19:12.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme4n1 : 1.20 375.25 23.45 0.00 0.00 160561.08 5470.83 152271.47 00:19:12.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme5n1 : 1.20 372.25 23.27 0.00 0.00 159870.19 7978.30 141329.81 00:19:12.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme6n1 : 1.20 371.79 23.24 0.00 0.00 157719.85 8605.16 132211.76 00:19:12.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme7n1 : 1.21 371.37 23.21 0.00 0.00 155482.32 9061.06 124005.51 00:19:12.249 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.249 Verification LBA range: start 0x0 length 0x400 00:19:12.249 Nvme8n1 : 1.21 370.96 23.18 0.00 0.00 153378.73 9402.99 116255.17 00:19:12.250 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.250 Verification LBA range: start 0x0 length 0x400 00:19:12.250 Nvme9n1 : 1.21 370.48 23.15 0.00 0.00 151557.47 10029.86 104857.60 00:19:12.250 
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:12.250 Verification LBA range: start 0x0 length 0x400 00:19:12.250 Nvme10n1 : 1.21 264.21 16.51 0.00 0.00 209721.25 9858.89 404841.52 00:19:12.250 [2024-11-15T10:01:01.134Z] =================================================================================================================== 00:19:12.250 [2024-11-15T10:01:01.134Z] Total : 3552.85 222.05 0.00 0.00 164771.41 4843.97 404841.52 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:12.509 rmmod nvme_rdma 00:19:12.509 rmmod nvme_fabrics 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1475338 ']' 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1475338 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1475338 ']' 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 1475338 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1475338 00:19:12.509 11:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1475338' 00:19:12.509 killing process with pid 1475338 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 1475338 00:19:12.509 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 1475338 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:13.078 00:19:13.078 real 0m11.597s 00:19:13.078 user 0m27.954s 00:19:13.078 sys 0m5.060s 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:13.078 ************************************ 00:19:13.078 END TEST nvmf_shutdown_tc1 00:19:13.078 ************************************ 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:13.078 ************************************ 00:19:13.078 START TEST nvmf_shutdown_tc2 00:19:13.078 ************************************ 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:13.078 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.079 11:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:19:13.079 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:19:13.079 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 
> 0 )) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:19:13.079 Found net devices under 0000:af:00.0: mlx_0_0 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:19:13.079 Found net devices under 0000:af:00.1: mlx_0_1 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:13.079 11:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.079 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:13.080 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.080 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:19:13.080 altname enp175s0f0np0 00:19:13.080 altname ens801f0np0 00:19:13.080 inet 192.168.100.8/24 scope global mlx_0_0 00:19:13.080 valid_lft forever preferred_lft forever 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:13.080 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.080 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:19:13.080 altname enp175s0f1np1 00:19:13.080 altname ens801f1np1 00:19:13.080 inet 192.168.100.9/24 scope global mlx_0_1 00:19:13.080 valid_lft forever preferred_lft forever 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:13.080 
11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:13.080 11:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:13.080 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:13.341 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:13.341 192.168.100.9' 00:19:13.341 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:19:13.341 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:13.342 192.168.100.9' 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:13.342 192.168.100.9' 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:13.342 11:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1476615 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1476615 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1476615 ']' 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:13.342 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.342 [2024-11-15 11:01:02.053820] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:19:13.342 [2024-11-15 11:01:02.053866] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.342 [2024-11-15 11:01:02.117081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.342 [2024-11-15 11:01:02.159791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.342 [2024-11-15 11:01:02.159823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.342 [2024-11-15 11:01:02.159830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.342 [2024-11-15 11:01:02.159836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.342 [2024-11-15 11:01:02.159841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
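Note: the nvmfappstart/waitforlisten sequence above reduces to backgrounding nvmf_tgt and polling its RPC socket until it answers. A minimal standalone sketch of the same pattern follows; the binary path, -i/-e/-m flags and socket path are taken from the trace, while using rpc_get_methods as the liveness probe and the fixed retry budget are assumptions (the harness's waitforlisten checks /proc and the socket instead).

#!/usr/bin/env bash
# Sketch: start the SPDK NVMe-oF target and wait until its RPC socket is live.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc.py exits non-zero until the app is up and listening on /var/tmp/spdk.sock
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done
echo "nvmf_tgt up with pid $nvmfpid"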
00:19:13.342 [2024-11-15 11:01:02.161525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.342 [2024-11-15 11:01:02.161609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.342 [2024-11-15 11:01:02.161734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.342 [2024-11-15 11:01:02.161735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.602 [2024-11-15 11:01:02.320398] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa11530/0xa15a20) succeed. 00:19:13.602 [2024-11-15 11:01:02.329791] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa12bc0/0xa570c0) succeed. 
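Note: the nvmf_create_transport RPC above is what makes the two mlx5 devices usable as NVMe-oF ports; the create_ib_device notices are its direct result. Issued by hand with only the options visible in the trace (rpc.py defaults to the /var/tmp/spdk.sock socket), it would be:

# Sketch: create the RDMA transport exactly as the test does.
# --num-shared-buffers sizes the shared receive pool; -u caps in-capsule data at 8 KiB.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192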
00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.602 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.861 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.861 Malloc1 00:19:13.861 [2024-11-15 11:01:02.551260] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:13.861 Malloc2 00:19:13.861 Malloc3 00:19:13.861 Malloc4 00:19:13.861 Malloc5 00:19:14.120 Malloc6 00:19:14.120 Malloc7 00:19:14.120 Malloc8 00:19:14.120 Malloc9 00:19:14.120 Malloc10 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1476740 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1476740 /var/tmp/bdevperf.sock 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1476740 ']' 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
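Note: each pass of the create_subsystems loop above appends one batch of RPCs to rpcs.txt, and the Malloc1..Malloc10 bdevs plus the single RDMA listen notice are the visible outcome. A sketch of the equivalent direct RPC sequence; the NQNs, address and port come from the log, while the malloc bdev size/block size and the serial-number string are assumptions:

# Sketch: one malloc-backed subsystem per index, exported over RDMA.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 10); do
    $rpc bdev_malloc_create -b Malloc$i 128 512            # 128 MiB, 512 B blocks (assumed)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
done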
00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.120 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.120 { 00:19:14.120 "params": { 00:19:14.120 "name": "Nvme$subsystem", 00:19:14.120 "trtype": "$TEST_TRANSPORT", 00:19:14.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.120 "adrfam": "ipv4", 00:19:14.120 "trsvcid": "$NVMF_PORT", 00:19:14.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.120 "hdgst": ${hdgst:-false}, 00:19:14.120 "ddgst": ${ddgst:-false} 00:19:14.121 }, 00:19:14.121 "method": "bdev_nvme_attach_controller" 00:19:14.121 } 00:19:14.121 EOF 00:19:14.121 )") 00:19:14.121 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.121 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.121 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.121 { 00:19:14.121 "params": { 00:19:14.121 "name": "Nvme$subsystem", 00:19:14.121 "trtype": "$TEST_TRANSPORT", 00:19:14.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.121 "adrfam": "ipv4", 00:19:14.121 "trsvcid": "$NVMF_PORT", 00:19:14.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.121 "hdgst": ${hdgst:-false}, 00:19:14.121 "ddgst": ${ddgst:-false} 00:19:14.121 }, 00:19:14.121 "method": "bdev_nvme_attach_controller" 00:19:14.121 } 00:19:14.121 EOF 00:19:14.121 )") 00:19:14.121 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.121 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.121 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.121 { 00:19:14.121 "params": { 00:19:14.121 "name": "Nvme$subsystem", 00:19:14.121 "trtype": "$TEST_TRANSPORT", 00:19:14.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.121 "adrfam": "ipv4", 00:19:14.121 "trsvcid": "$NVMF_PORT", 00:19:14.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.121 "hdgst": ${hdgst:-false}, 00:19:14.121 "ddgst": ${ddgst:-false} 00:19:14.121 }, 00:19:14.121 "method": "bdev_nvme_attach_controller" 00:19:14.121 } 00:19:14.121 EOF 00:19:14.121 )") 00:19:14.121 11:01:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.121 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.121 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.121 { 00:19:14.121 "params": { 00:19:14.121 "name": "Nvme$subsystem", 00:19:14.121 "trtype": "$TEST_TRANSPORT", 00:19:14.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.121 "adrfam": "ipv4", 00:19:14.121 "trsvcid": "$NVMF_PORT", 00:19:14.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.121 "hdgst": ${hdgst:-false}, 00:19:14.121 "ddgst": ${ddgst:-false} 00:19:14.121 }, 00:19:14.121 "method": "bdev_nvme_attach_controller" 00:19:14.121 } 00:19:14.121 EOF 00:19:14.121 )") 00:19:14.380 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.380 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.380 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.380 { 00:19:14.380 "params": { 00:19:14.380 "name": "Nvme$subsystem", 00:19:14.380 "trtype": "$TEST_TRANSPORT", 00:19:14.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.380 "adrfam": "ipv4", 00:19:14.380 "trsvcid": "$NVMF_PORT", 00:19:14.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.380 "hdgst": ${hdgst:-false}, 00:19:14.380 "ddgst": ${ddgst:-false} 00:19:14.380 }, 00:19:14.380 "method": "bdev_nvme_attach_controller" 00:19:14.380 } 00:19:14.380 EOF 00:19:14.380 )") 00:19:14.380 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.380 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.380 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.380 { 00:19:14.380 "params": { 00:19:14.380 "name": "Nvme$subsystem", 00:19:14.380 "trtype": "$TEST_TRANSPORT", 00:19:14.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.380 "adrfam": "ipv4", 00:19:14.380 "trsvcid": "$NVMF_PORT", 00:19:14.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.380 "hdgst": ${hdgst:-false}, 00:19:14.381 "ddgst": ${ddgst:-false} 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 } 00:19:14.381 EOF 00:19:14.381 )") 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.381 [2024-11-15 11:01:03.023924] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
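Note: the config+=() heredoc fragments accumulating here are what gen_nvmf_target_json emits, one bdev_nvme_attach_controller entry per subsystem, later joined by jq and handed to bdevperf as /dev/fd/63 via bash process substitution. A one-controller reproduction of that plumbing; the outer "subsystems"/"bdev" wrapper follows SPDK's standard JSON config layout and is an assumption, since the trace only prints the per-controller entries:

# Sketch: generate a one-controller JSON config and feed it to bdevperf
# without a temp file, mirroring the --json /dev/fd/63 seen in the trace.
gen_json() {
cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "params": { "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" } ] } ] }
EOF
}
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json <(gen_json) -q 64 -o 65536 -w verify -t 10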
00:19:14.381 [2024-11-15 11:01:03.023976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476740 ] 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.381 { 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme$subsystem", 00:19:14.381 "trtype": "$TEST_TRANSPORT", 00:19:14.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "$NVMF_PORT", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.381 "hdgst": ${hdgst:-false}, 00:19:14.381 "ddgst": ${ddgst:-false} 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 } 00:19:14.381 EOF 00:19:14.381 )") 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.381 { 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme$subsystem", 00:19:14.381 "trtype": "$TEST_TRANSPORT", 00:19:14.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "$NVMF_PORT", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.381 "hdgst": ${hdgst:-false}, 00:19:14.381 "ddgst": ${ddgst:-false} 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 } 00:19:14.381 EOF 00:19:14.381 )") 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.381 { 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme$subsystem", 00:19:14.381 "trtype": "$TEST_TRANSPORT", 00:19:14.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "$NVMF_PORT", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.381 "hdgst": ${hdgst:-false}, 00:19:14.381 "ddgst": ${ddgst:-false} 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 } 00:19:14.381 EOF 00:19:14.381 )") 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:14.381 { 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme$subsystem", 00:19:14.381 "trtype": "$TEST_TRANSPORT", 00:19:14.381 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "$NVMF_PORT", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.381 "hdgst": ${hdgst:-false}, 00:19:14.381 "ddgst": ${ddgst:-false} 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 } 00:19:14.381 EOF 00:19:14.381 )") 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:19:14.381 11:01:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme1", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme2", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme3", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme4", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme5", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme6", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme7", 00:19:14.381 
"trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme8", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme9", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 },{ 00:19:14.381 "params": { 00:19:14.381 "name": "Nvme10", 00:19:14.381 "trtype": "rdma", 00:19:14.381 "traddr": "192.168.100.8", 00:19:14.381 "adrfam": "ipv4", 00:19:14.381 "trsvcid": "4420", 00:19:14.381 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:14.381 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:14.381 "hdgst": false, 00:19:14.381 "ddgst": false 00:19:14.381 }, 00:19:14.381 "method": "bdev_nvme_attach_controller" 00:19:14.381 }' 00:19:14.381 [2024-11-15 11:01:03.089210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.382 [2024-11-15 11:01:03.130626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.318 Running I/O for 10 seconds... 
00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.318 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:15.576 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.576 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=27 00:19:15.576 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 27 -ge 100 ']' 00:19:15.576 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:15.834 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:15.834 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:15.834 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:15.834 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:15.834 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.834 
11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=191
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 191 -ge 100 ']'
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1476740
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1476740 ']'
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1476740
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1476740
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:19:16.093 11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1476740'
killing process with pid 1476740
11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1476740
11:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1476740
00:19:16.093 Received shutdown signal, test time was about 0.889748 seconds
00:19:16.093
00:19:16.093 Latency(us)
00:19:16.093 [2024-11-15T10:01:04.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:16.093 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme1n1 : 0.87 361.17 22.57 0.00 0.00 173742.00 9573.95 246187.41
00:19:16.093 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme2n1 : 0.88 365.19 22.82 0.00 0.00 168342.84 10200.82 178713.82
00:19:16.093 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme3n1 : 0.88 364.66 22.79 0.00 0.00 165265.23 10485.76 171419.38
00:19:16.093 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme4n1 : 0.88 372.10 23.26 0.00 0.00 158763.90 6382.64 159565.91
00:19:16.093 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme5n1 : 0.88 363.47 22.72 0.00 0.00 159840.17 11340.58 152271.47
00:19:16.093 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme6n1 : 0.88 362.93 22.68 0.00 0.00 156295.61 11796.48 144065.22
00:19:16.093 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme7n1 : 0.88 364.56 22.79 0.00 0.00 152830.11 6411.13 134035.37
00:19:16.093 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme8n1 : 0.88 361.71 22.61 0.00 0.00 150570.65 12822.26 124917.31
00:19:16.093 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme9n1 : 0.89 360.13 22.51 0.00 0.00 148110.58 2920.63 114431.55
00:19:16.093 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:16.093 Verification LBA range: start 0x0 length 0x400
00:19:16.093 Nvme10n1 : 0.87 293.09 18.32 0.00 0.00 178425.32 9175.04 189655.49
00:19:16.093 [2024-11-15T10:01:04.977Z] ===================================================================================================================
00:19:16.093 [2024-11-15T10:01:04.977Z] Total : 3569.02 223.06 0.00 0.00 160841.28 2920.63 246187.41
00:19:16.351 11:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1476615
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:17.286 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:19:17.545 rmmod nvme_rdma 00:19:17.545 rmmod nvme_fabrics 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1476615 ']' 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1476615 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1476615 ']' 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1476615 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1476615 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1476615' 00:19:17.545 killing process with pid 1476615 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1476615 00:19:17.545 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1476615 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:18.113 00:19:18.113 real 0m4.923s 00:19:18.113 user 0m19.966s 00:19:18.113 sys 0m1.010s 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:18.113 ************************************ 00:19:18.113 END TEST nvmf_shutdown_tc2 00:19:18.113 ************************************ 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:18.113 ************************************ 00:19:18.113 START TEST nvmf_shutdown_tc3 00:19:18.113 ************************************ 00:19:18.113 11:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:19:18.113 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:19:18.114 11:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:19:18.114 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:19:18.114 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:19:18.114 Found net devices under 0000:af:00.0: mlx_0_0 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:19:18.114 Found net devices under 0000:af:00.1: mlx_0_1 00:19:18.114 11:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:18.114 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.115 11:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:18.115 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.115 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:19:18.115 altname enp175s0f0np0 00:19:18.115 altname ens801f0np0 00:19:18.115 inet 192.168.100.8/24 scope global mlx_0_0 00:19:18.115 valid_lft forever preferred_lft forever 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.115 11:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:18.115 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.115 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:19:18.115 altname enp175s0f1np1 00:19:18.115 altname ens801f1np1 00:19:18.115 inet 192.168.100.9/24 scope global mlx_0_1 00:19:18.115 valid_lft forever preferred_lft forever 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.115 
11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:18.115 192.168.100.9' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:18.115 192.168.100.9' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:18.115 192.168.100.9' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:18.115 11:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:18.115 11:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1477546 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1477546 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1477546 ']' 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.374 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.374 [2024-11-15 11:01:07.063751] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:19:18.374 [2024-11-15 11:01:07.063799] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.374 [2024-11-15 11:01:07.129517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.374 [2024-11-15 11:01:07.170749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.374 [2024-11-15 11:01:07.170787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.374 [2024-11-15 11:01:07.170794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.374 [2024-11-15 11:01:07.170800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.374 [2024-11-15 11:01:07.170805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
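nvmfappstart above reduces to launching nvmf_tgt with the logged flags and then blocking until its RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and using spdk_get_version as the liveness probe (the helper's internals may differ):

  # Same command line as logged: instance 0, tracepoint mask 0xFFFF,
  # reactors on cores 1-4 (mask 0x1E).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version > /dev/null 2>&1; do
      sleep 0.5    # keep polling until the UNIX domain socket is live
  done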
00:19:18.374 [2024-11-15 11:01:07.172512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.374 [2024-11-15 11:01:07.172612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.374 [2024-11-15 11:01:07.172720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.374 [2024-11-15 11:01:07.172721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.632 [2024-11-15 11:01:07.342578] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x206d530/0x2071a20) succeed. 00:19:18.632 [2024-11-15 11:01:07.351926] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x206ebc0/0x20b30c0) succeed. 
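The two create_ib_device notices above show the rdma transport claiming both mlx5 ports discovered earlier (mlx_0_0 and mlx_0_1). Recreated by hand, the step is the one RPC from the trace, followed here by an optional nvmf_get_transports query (assumed available in this SPDK build) to confirm the registration:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_get_transports    # should list rdma with io_unit_size 8192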
00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.632 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.890 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.890 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.890 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:18.890 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:18.890 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:19:18.890 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.890 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.890 Malloc1 00:19:18.890 [2024-11-15 11:01:07.571281] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:18.890 Malloc2 00:19:18.890 Malloc3 00:19:18.890 Malloc4 00:19:18.890 Malloc5 00:19:18.890 Malloc6 00:19:19.149 Malloc7 00:19:19.149 Malloc8 00:19:19.149 Malloc9 00:19:19.149 Malloc10 00:19:19.149 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.149 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:19.149 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.149 11:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1477839 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1477839 /var/tmp/bdevperf.sock 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1477839 ']' 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
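Each pass through the create_subsystems loop above appends one subsystem's worth of RPCs to rpcs.txt, which the rpc_cmd at shutdown.sh@36 then replays in one batch. The Malloc1 through Malloc10 bdevs and the single 4420 listener notice suggest each iteration amounts to roughly the following, shown as direct calls for one subsystem (a reconstruction, not the script's literal contents; the malloc size and block values are assumptions):

  i=1    # the loop runs this for i in 1..10
  ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t rdma -a 192.168.100.8 -s 4420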
00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.149 { 00:19:19.149 "params": { 00:19:19.149 "name": "Nvme$subsystem", 00:19:19.149 "trtype": "$TEST_TRANSPORT", 00:19:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.149 "adrfam": "ipv4", 00:19:19.149 "trsvcid": "$NVMF_PORT", 00:19:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.149 "hdgst": ${hdgst:-false}, 00:19:19.149 "ddgst": ${ddgst:-false} 00:19:19.149 }, 00:19:19.149 "method": "bdev_nvme_attach_controller" 00:19:19.149 } 00:19:19.149 EOF 00:19:19.149 )") 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.149 { 00:19:19.149 "params": { 00:19:19.149 "name": "Nvme$subsystem", 00:19:19.149 "trtype": "$TEST_TRANSPORT", 00:19:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.149 "adrfam": "ipv4", 00:19:19.149 "trsvcid": "$NVMF_PORT", 00:19:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.149 "hdgst": ${hdgst:-false}, 00:19:19.149 "ddgst": ${ddgst:-false} 00:19:19.149 }, 00:19:19.149 "method": "bdev_nvme_attach_controller" 00:19:19.149 } 00:19:19.149 EOF 00:19:19.149 )") 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.149 { 00:19:19.149 "params": { 00:19:19.149 "name": "Nvme$subsystem", 00:19:19.149 "trtype": "$TEST_TRANSPORT", 00:19:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.149 "adrfam": "ipv4", 00:19:19.149 "trsvcid": "$NVMF_PORT", 00:19:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.149 "hdgst": ${hdgst:-false}, 00:19:19.149 "ddgst": ${ddgst:-false} 00:19:19.149 }, 00:19:19.149 "method": "bdev_nvme_attach_controller" 00:19:19.149 } 00:19:19.149 EOF 00:19:19.149 )") 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.149 { 00:19:19.149 "params": { 00:19:19.149 "name": "Nvme$subsystem", 00:19:19.149 "trtype": "$TEST_TRANSPORT", 00:19:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.149 "adrfam": "ipv4", 00:19:19.149 "trsvcid": "$NVMF_PORT", 00:19:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.149 "hdgst": ${hdgst:-false}, 00:19:19.149 "ddgst": ${ddgst:-false} 00:19:19.149 }, 00:19:19.149 "method": "bdev_nvme_attach_controller" 00:19:19.149 } 00:19:19.149 EOF 00:19:19.149 )") 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.149 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.149 { 00:19:19.149 "params": { 00:19:19.149 "name": "Nvme$subsystem", 00:19:19.149 "trtype": "$TEST_TRANSPORT", 00:19:19.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.149 "adrfam": "ipv4", 00:19:19.149 "trsvcid": "$NVMF_PORT", 00:19:19.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.149 "hdgst": ${hdgst:-false}, 00:19:19.149 "ddgst": ${ddgst:-false} 00:19:19.149 }, 00:19:19.149 "method": "bdev_nvme_attach_controller" 00:19:19.149 } 00:19:19.149 EOF 00:19:19.149 )") 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.409 { 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme$subsystem", 00:19:19.409 "trtype": "$TEST_TRANSPORT", 00:19:19.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "$NVMF_PORT", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.409 "hdgst": ${hdgst:-false}, 00:19:19.409 "ddgst": ${ddgst:-false} 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 } 00:19:19.409 EOF 00:19:19.409 )") 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.409 { 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme$subsystem", 00:19:19.409 "trtype": "$TEST_TRANSPORT", 00:19:19.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "$NVMF_PORT", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.409 "hdgst": ${hdgst:-false}, 00:19:19.409 "ddgst": ${ddgst:-false} 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 } 00:19:19.409 EOF 00:19:19.409 )") 00:19:19.409 [2024-11-15 11:01:08.048194] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:19:19.409 [2024-11-15 11:01:08.048245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477839 ] 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.409 { 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme$subsystem", 00:19:19.409 "trtype": "$TEST_TRANSPORT", 00:19:19.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "$NVMF_PORT", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.409 "hdgst": ${hdgst:-false}, 00:19:19.409 "ddgst": ${ddgst:-false} 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 } 00:19:19.409 EOF 00:19:19.409 )") 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.409 { 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme$subsystem", 00:19:19.409 "trtype": "$TEST_TRANSPORT", 00:19:19.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "$NVMF_PORT", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.409 "hdgst": ${hdgst:-false}, 00:19:19.409 "ddgst": ${ddgst:-false} 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 } 00:19:19.409 EOF 00:19:19.409 )") 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:19.409 { 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme$subsystem", 00:19:19.409 "trtype": "$TEST_TRANSPORT", 00:19:19.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "$NVMF_PORT", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.409 "hdgst": ${hdgst:-false}, 00:19:19.409 "ddgst": ${ddgst:-false} 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 } 00:19:19.409 EOF 00:19:19.409 )") 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
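The config+=(...) heredocs above are gen_nvmf_target_json at work: one bdev_nvme_attach_controller stanza per subsystem, validated by the jq . step just traced and comma-joined by the IFS=, and printf pair that appears in the output below. A trimmed sketch of the pattern, with values from this run hard-coded and the surrounding JSON reduced to a bare bdev section (the real helper emits more configuration around it):

  gen_stanza() {
      # One attach stanza per subsystem, matching the heredoc fields above.
      printf '{"params": {"name": "Nvme%s", "trtype": "rdma", "traddr": "192.168.100.8", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' "$1" "$1" "$1"
  }
  config=()
  for subsystem in 1 2 3 4 5 6 7 8 9 10; do
      config+=("$(gen_stanza "$subsystem")")
  done
  # Comma-join the stanzas and let jq validate and pretty-print the result.
  (IFS=,; printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}' "${config[*]}") | jq .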
00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:19:19.409 11:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme1", 00:19:19.409 "trtype": "rdma", 00:19:19.409 "traddr": "192.168.100.8", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "4420", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.409 "hdgst": false, 00:19:19.409 "ddgst": false 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 },{ 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme2", 00:19:19.409 "trtype": "rdma", 00:19:19.409 "traddr": "192.168.100.8", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "4420", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:19.409 "hdgst": false, 00:19:19.409 "ddgst": false 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 },{ 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme3", 00:19:19.409 "trtype": "rdma", 00:19:19.409 "traddr": "192.168.100.8", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "4420", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:19.409 "hdgst": false, 00:19:19.409 "ddgst": false 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 },{ 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme4", 00:19:19.409 "trtype": "rdma", 00:19:19.409 "traddr": "192.168.100.8", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "4420", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:19.409 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:19.409 "hdgst": false, 00:19:19.409 "ddgst": false 00:19:19.409 }, 00:19:19.409 "method": "bdev_nvme_attach_controller" 00:19:19.409 },{ 00:19:19.409 "params": { 00:19:19.409 "name": "Nvme5", 00:19:19.409 "trtype": "rdma", 00:19:19.409 "traddr": "192.168.100.8", 00:19:19.409 "adrfam": "ipv4", 00:19:19.409 "trsvcid": "4420", 00:19:19.409 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:19.410 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:19.410 "hdgst": false, 00:19:19.410 "ddgst": false 00:19:19.410 }, 00:19:19.410 "method": "bdev_nvme_attach_controller" 00:19:19.410 },{ 00:19:19.410 "params": { 00:19:19.410 "name": "Nvme6", 00:19:19.410 "trtype": "rdma", 00:19:19.410 "traddr": "192.168.100.8", 00:19:19.410 "adrfam": "ipv4", 00:19:19.410 "trsvcid": "4420", 00:19:19.410 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:19.410 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:19.410 "hdgst": false, 00:19:19.410 "ddgst": false 00:19:19.410 }, 00:19:19.410 "method": "bdev_nvme_attach_controller" 00:19:19.410 },{ 00:19:19.410 "params": { 00:19:19.410 "name": "Nvme7", 00:19:19.410 "trtype": "rdma", 00:19:19.410 "traddr": "192.168.100.8", 00:19:19.410 "adrfam": "ipv4", 00:19:19.410 "trsvcid": "4420", 00:19:19.410 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:19.410 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:19.410 "hdgst": false, 00:19:19.410 "ddgst": false 00:19:19.410 }, 00:19:19.410 "method": "bdev_nvme_attach_controller" 00:19:19.410 },{ 00:19:19.410 "params": { 00:19:19.410 "name": "Nvme8", 00:19:19.410 "trtype": "rdma", 00:19:19.410 "traddr": "192.168.100.8", 00:19:19.410 "adrfam": "ipv4", 00:19:19.410 "trsvcid": "4420", 00:19:19.410 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:19.410 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:19.410 "hdgst": false, 00:19:19.410 "ddgst": false 00:19:19.410 }, 00:19:19.410 "method": "bdev_nvme_attach_controller" 00:19:19.410 },{ 00:19:19.410 "params": { 00:19:19.410 "name": "Nvme9", 00:19:19.410 "trtype": "rdma", 00:19:19.410 "traddr": "192.168.100.8", 00:19:19.410 "adrfam": "ipv4", 00:19:19.410 "trsvcid": "4420", 00:19:19.410 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:19.410 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:19.410 "hdgst": false, 00:19:19.410 "ddgst": false 00:19:19.410 }, 00:19:19.410 "method": "bdev_nvme_attach_controller" 00:19:19.410 },{ 00:19:19.410 "params": { 00:19:19.410 "name": "Nvme10", 00:19:19.410 "trtype": "rdma", 00:19:19.410 "traddr": "192.168.100.8", 00:19:19.410 "adrfam": "ipv4", 00:19:19.410 "trsvcid": "4420", 00:19:19.410 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:19.410 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:19.410 "hdgst": false, 00:19:19.410 "ddgst": false 00:19:19.410 }, 00:19:19.410 "method": "bdev_nvme_attach_controller" 00:19:19.410 }' 00:19:19.410 [2024-11-15 11:01:08.112269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.410 [2024-11-15 11:01:08.153903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.345 Running I/O for 10 seconds... 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.345 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:20.603 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.603 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:19:20.603 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:19:20.603 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:20.862 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:20.862 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:20.862 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:20.862 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:20.862 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.862 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:21.121 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.121 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=155 00:19:21.121 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']' 00:19:21.121 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1477546 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1477546 ']' 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1477546 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1477546 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:21.122 11:01:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1477546' 00:19:21.122 killing process with pid 1477546 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 1477546 00:19:21.122 11:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 1477546 00:19:21.638 2577.00 IOPS, 161.06 MiB/s [2024-11-15T10:01:10.522Z] 11:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:19:22.212 [2024-11-15 11:01:10.883364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.883402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.883413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.883420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.883444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.883451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.883458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.883465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.885194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.212 [2024-11-15 11:01:10.885245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
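In order, the shutdown_tc3 trace above: waitforio polls `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1`, extracts `.bdevs[0].num_read_ops` with jq, and retries up to 10 times at 0.25 s intervals until at least 100 reads have completed (3 on the first poll, 155 on the second); once I/O is confirmed in flight, killprocess kills the target-side app (pid 1477546 in this run) while bdevperf keeps running, which is exactly what produces the error cascade that begins above. A condensed sketch of that wait-then-kill sequence, assuming SPDK's scripts/rpc.py in place of the harness's rpc_cmd wrapper:

# Condensed from target/shutdown.sh's waitforio + killprocess pattern;
# socket path, bdev name, and thresholds are the ones visible in the trace.
waitforio() {
  local sock=$1 bdev=$2 i ops
  for ((i = 10; i != 0; i--)); do
    ops=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].num_read_ops')
    [[ $ops -ge 100 ]] && return 0   # enough I/O observed in flight
    sleep 0.25
  done
  return 1
}

waitforio /var/tmp/bdevperf.sock Nvme1n1 || exit 1
target_pid=1477546                 # pid from this run's trace
kill "$target_pid"                 # tear the target down mid-I/O
wait "$target_pid"                 # works here: the target is a child of the script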
00:19:22.212 [2024-11-15 11:01:10.885304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.885329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.885338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.885345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.885352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.885359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.885366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.885373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.886717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.212 [2024-11-15 11:01:10.886751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:19:22.212 [2024-11-15 11:01:10.886792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.212 [2024-11-15 11:01:10.886816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.212 [2024-11-15 11:01:10.886840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.886869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.886893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.886914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.886937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.886958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.888362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.888405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
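The quartet of "ASYNC EVENT REQUEST ... cid:1..4" aborts repeating above is the host side of the shutdown: killing the target deletes its submission queues, so the four outstanding AER admin commands on each controller complete as ABORTED - SQ DELETION, the completion-queue polls fail with transport error -6 (No such device or address), and nvme_ctrlr_fail marks that controller failed. The identical sequence then repeats for every attached controller. An illustrative way to confirm all ten paths died, not part of the suite (assumes the console output was saved to bdevperf.log):

# Illustrative log triage only; expect cnode1 .. cnode10, one each.
grep -Eo 'cnode[0-9]+, 1] in failed state' bdevperf.log | sort -uV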
00:19:22.213 [2024-11-15 11:01:10.888425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.888435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.888446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.888455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.888466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.888475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.888486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.888495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.890176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.890208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:19:22.213 [2024-11-15 11:01:10.890252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.890275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.890298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.890319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.890343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.890363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.890387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.890407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.892212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.892252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
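Every completion above carries the same status pair "(00/08)", which spdk_nvme_print_completion prints as status-code-type/status-code and expands to the ABORTED - SQ DELETION text beside it. A throwaway decoder for reading such logs; the 0/8 mapping comes straight from these lines and 0/0 is the generic success code, anything else is left to the NVMe base spec:

# Throwaway helper for log reading, not part of the harness.
decode_nvme_status() {
  local sct=$((16#${1%/*})) sc=$((16#${1#*/}))   # "(SCT/SC)" pair as hex
  case "$sct/$sc" in
    0/0) echo "GENERIC: SUCCESS" ;;
    0/8) echo "GENERIC: ABORTED - SQ DELETION" ;;  # what this log shows
    *)   echo "SCT=$sct SC=$sc: see the NVMe base spec status tables" ;;
  esac
}
decode_nvme_status 00/08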
00:19:22.213 [2024-11-15 11:01:10.892290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.892324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.892335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.892346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.892356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.892365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.892374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.892383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.893954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.893986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:19:22.213 [2024-11-15 11:01:10.894027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.894051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.894075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.894096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.894118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.894138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.894161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.894197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.896332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.896365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:19:22.213 [2024-11-15 11:01:10.896407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.896432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.896455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.896476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.896506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.896526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.896549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.896571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.898070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.898102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:19:22.213 [2024-11-15 11:01:10.898140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.898218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.898253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.898263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.898273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.898282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.898292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.898301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.899836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.899867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:19:22.213 [2024-11-15 11:01:10.899908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.899933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.899955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.899977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.899999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.900020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.900043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.213 [2024-11-15 11:01:10.900064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32555 cdw0:1 sqhd:d990 p:0 m:0 dnr:0 00:19:22.213 [2024-11-15 11:01:10.901523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:22.213 [2024-11-15 11:01:10.901562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:19:22.213 [2024-11-15 11:01:10.903294] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:19:22.214 [2024-11-15 11:01:10.904801] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:19:22.214 [2024-11-15 11:01:10.906278] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:19:22.214 [2024-11-15 11:01:10.907766] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:19:22.214 [2024-11-15 11:01:10.909407] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:19:22.214 [2024-11-15 11:01:10.911121] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
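From here to the end of the dump, bdev_nvme first reports that it cannot fail the controllers over (no alternate path is configured, and a failover attempt is already pending from the controller failures above), and then every queued WRITE and READ is completed as ABORTED - SQ DELETION, one record per command with its sqid/cid, LBA, length, and RDMA memory key. Summing those records shows how much I/O was caught in flight when the target died; again purely illustrative log parsing, under the same bdevperf.log assumption:

# Illustrative only: total up the WRITEs the host had to abort.
grep -Eo 'WRITE sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+ len:[0-9]+' bdevperf.log \
  | awk -F'[ :]' '{blocks += $NF} END {print NR " aborted WRITEs, " blocks " blocks"}'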
00:19:22.214 [2024-11-15 11:01:10.911266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100221f180 len:0x10000 key:0x189f00 00:19:22.214 [2024-11-15 11:01:10.911297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100220f100 len:0x10000 key:0x189f00 00:19:22.214 [2024-11-15 11:01:10.911379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025f0000 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025dff80 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025cff00 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025bfe80 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010025afe00 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100259fd80 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100258fd00 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 
11:01:10.911649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100257fc80 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100256fc00 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100255fb80 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100254fb00 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100253fa80 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100252fa00 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100251f980 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100250f900 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024ff880 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024ef800 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024df780 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024cf700 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024bf680 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.911978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010024af600 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.911989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100249f580 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100248f500 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100247f480 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100246f400 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100245f380 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100244f300 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100243f280 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100242f200 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100241f180 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100240f100 len:0x10000 key:0x18a000 00:19:22.214 [2024-11-15 11:01:10.912249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.214 [2024-11-15 11:01:10.912264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027f0000 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027dff80 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027cff00 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027bfe80 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010027afe00 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100279fd80 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100278fd00 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100277fc80 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100276fc00 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100275fb80 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100274fb00 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100273fa80 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20100272fa00 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100271f980 len:0x10000 key:0x18ab00 00:19:22.215 [2024-11-15 11:01:10.912606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100232fa00 len:0x10000 key:0x189f00 00:19:22.215 [2024-11-15 11:01:10.912632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed82000 len:0x10000 key:0x18a600 00:19:22.215 [2024-11-15 11:01:10.912658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eda3000 len:0x10000 key:0x18a600 00:19:22.215 [2024-11-15 11:01:10.912687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c610000 len:0x10000 key:0x18a600 00:19:22.215 [2024-11-15 11:01:10.912713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c631000 len:0x10000 key:0x18a600 00:19:22.215 [2024-11-15 11:01:10.912740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000edc4000 len:0x10000 key:0x18a600 00:19:22.215 [2024-11-15 11:01:10.912767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ede5000 len:0x10000 key:0x18a600 00:19:22.215 [2024-11-15 11:01:10.912795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.215 [2024-11-15 11:01:10.912811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ee06000 
00:19:22.215 [2024-11-15 11:01:10.912821-10.913058] nvme_qpair.c: *NOTICE*: READ sqid:1 nsid:1 lba:33664-34688 len:128 SGL KEYED DATA BLOCK (key:0x18a600), 9 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:7210 p:0 m:0 dnr:0 [tail of run; per-command notices condensed]
00:19:22.215 [2024-11-15 11:01:10.915426] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:19:22.217 [2024-11-15 11:01:10.915455-10.917112] nvme_qpair.c: *NOTICE*: WRITE sqid:1 nsid:1 lba:33792-40832 len:128 SGL KEYED DATA BLOCK (keys 0x18ab00, 0x189800, 0x18ac00) and READ sqid:1 nsid:1 lba:32768-33664 len:128 (key:0x18a600), 64 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:7210 p:0 m:0 dnr:0 [per-command notices condensed]
00:19:22.217 [2024-11-15 11:01:10.919364] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:19:22.218 [2024-11-15 11:01:10.919389-10.921025] nvme_qpair.c: *NOTICE*: WRITE sqid:1 nsid:1 lba:32768-40832 len:128 SGL KEYED DATA BLOCK (keys 0x18ac00, 0x189b00, 0x18a400), 64 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:7210 p:0 m:0 dnr:0 [per-command notices condensed]
00:19:22.219 [2024-11-15 11:01:10.923405] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:19:22.221 [2024-11-15 11:01:10.923431-10.931896] nvme_qpair.c: *NOTICE*: WRITE sqid:1 nsid:1 lba:24576-31104 len:128 SGL KEYED DATA BLOCK (keys 0x18a400, 0x18a700, 0x18aa00), 52 commands, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:7210 p:0 m:0 dnr:0 [per-command notices condensed; final completion entry truncated in log]
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.931911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335fb80 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.931921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.931936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334fb00 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.931947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.931962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333fa80 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.931972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.931987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332fa00 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.931997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f980 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.932022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f900 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.932047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff880 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.932072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef800 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.932097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df780 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.932122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf700 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.932147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf680 len:0x10000 key:0x18aa00 00:19:22.221 [2024-11-15 11:01:10.932188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.932204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf700 len:0x10000 key:0x18a400 00:19:22.221 [2024-11-15 11:01:10.932214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:cbd63000 sqhd:7210 p:0 m:0 dnr:0 00:19:22.221 [2024-11-15 11:01:10.953143] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953281] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953296] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953308] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953317] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953327] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953337] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953346] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953356] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953367] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:19:22.221 [2024-11-15 11:01:10.953375] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:19:22.221 [2024-11-15 11:01:10.958284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:22.221 [2024-11-15 11:01:10.959252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:19:22.221 [2024-11-15 11:01:10.959270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:19:22.221 [2024-11-15 11:01:10.959281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:19:22.221 [2024-11-15 11:01:10.959291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:19:22.221 [2024-11-15 11:01:10.960663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:19:22.221 task offset: 35840 on job bdev=Nvme1n1 fails
00:19:22.221
00:19:22.221 Latency(us)
00:19:22.221 [2024-11-15T10:01:11.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:22.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.221 Job: Nvme1n1 ended in about 1.91 seconds with error
00:19:22.221 Verification LBA range: start 0x0 length 0x400
00:19:22.221 Nvme1n1 : 1.91 133.74 8.36 33.44 0.00 379912.55 37384.01 1050399.61
00:19:22.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.221 Job: Nvme2n1 ended in about 1.91 seconds with error
00:19:22.221 Verification LBA range: start 0x0 length 0x400
00:19:22.221 Nvme2n1 : 1.91 133.68 8.36 33.42 0.00 376654.98 40119.43 1050399.61
00:19:22.221 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.221 Job: Nvme3n1 ended in about 1.92 seconds with error
00:19:22.221 Verification LBA range: start 0x0 length 0x400
00:19:22.221 Nvme3n1 : 1.92 143.54 8.97 33.41 0.00 352404.61 4900.95 1050399.61
00:19:22.221 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.221 Job: Nvme4n1 ended in about 1.92 seconds with error
00:19:22.221 Verification LBA range: start 0x0 length 0x400
00:19:22.221 Nvme4n1 : 1.92 145.57 9.10 33.39 0.00 345266.68 4530.53 1050399.61
00:19:22.221 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.221 Job: Nvme5n1 ended in about 1.92 seconds with error
00:19:22.221 Verification LBA range: start 0x0 length 0x400
00:19:22.221 Nvme5n1 : 1.92 137.69 8.61 33.38 0.00 357904.83 13392.14 1050399.61
00:19:22.221 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.221 Job: Nvme6n1 ended in about 1.92 seconds with error
00:19:22.221 Verification LBA range: start 0x0 length 0x400
00:19:22.221 Nvme6n1 : 1.92 149.62 9.35 33.37 0.00 331502.73 16298.52 1050399.61
00:19:22.221 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.221 Job: Nvme7n1 ended in about 1.87 seconds with error
00:19:22.221 Verification LBA range: start 0x0 length 0x400
00:19:22.222 Nvme7n1 : 1.87 136.63 8.54 34.16 0.00 352364.99 24162.84 1094166.26
00:19:22.222 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.222 Job: Nvme8n1 ended in about 1.88 seconds with error
00:19:22.222 Verification LBA range: start 0x0 length 0x400
00:19:22.222 Nvme8n1 : 1.88 136.33 8.52 34.08 0.00 350009.25 28379.94 1086871.82
00:19:22.222 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.222 Job: Nvme9n1 ended in about 1.88 seconds with error
00:19:22.222 Verification LBA range: start 0x0 length 0x400
00:19:22.222 Nvme9n1 : 1.88 136.05 8.50 34.01 0.00 347501.08 54480.36 1072282.94
00:19:22.222 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:22.222 Job: Nvme10n1 ended in about 1.89 seconds with error
00:19:22.222 Verification LBA range: start 0x0 length 0x400
00:19:22.222 Nvme10n1 : 1.89 101.44 6.34 33.81 0.00 432063.89 54936.26 1064988.49
00:19:22.222 [2024-11-15T10:01:11.106Z] ===================================================================================================================
00:19:22.222 [2024-11-15T10:01:11.106Z] Total : 1354.30 84.64 336.46 0.00 360681.36 4530.53 1094166.26
00:19:22.222 [2024-11-15 11:01:10.987359] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:22.222 [2024-11-15 11:01:10.987381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:19:22.222 [2024-11-15 11:01:10.987394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:19:22.222 [2024-11-15 11:01:10.987418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:19:22.222 [2024-11-15 11:01:10.987427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:19:22.222 [2024-11-15 11:01:10.996204] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:10.996270] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:10.996292] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:19:22.222 [2024-11-15 11:01:11.002798] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.002821] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.002834] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e5300
00:19:22.222 [2024-11-15 11:01:11.002904] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.002916] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.002924] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d9c80
00:19:22.222 [2024-11-15 11:01:11.002997] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.003008] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.003016] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d2900
00:19:22.222 [2024-11-15 11:01:11.003072] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.003083] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.003091] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170c6340
00:19:22.222 [2024-11-15 11:01:11.003868] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.003884] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.003891] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001708e080
00:19:22.222 [2024-11-15 11:01:11.003956] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.003968] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.003975] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf1c0
00:19:22.222 [2024-11-15 11:01:11.004061] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.004072] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.004079] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001709b1c0
00:19:22.222 [2024-11-15 11:01:11.004138] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.004149] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.004156] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170a8500
00:19:22.222 [2024-11-15 11:01:11.004226] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:22.222 [2024-11-15 11:01:11.004238] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:22.222 [2024-11-15 11:01:11.004246] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170c5040
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1477839
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1477839
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:22.482 11:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1477839
00:19:23.418 [2024-11-15 11:01:12.000225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.000277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.000391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:19:23.418 [2024-11-15 11:01:12.000415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:19:23.418 [2024-11-15 11:01:12.000437] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:19:23.418 [2024-11-15 11:01:12.000463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:19:23.418 [2024-11-15 11:01:12.006605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.006645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.008070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.008102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.009379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.009394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.010417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.010449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.011813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.011845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.012993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.013023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.014057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.014088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.015383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.015415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.016696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.418 [2024-11-15 11:01:12.016737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:19:23.418 [2024-11-15 11:01:12.016757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:19:23.418 [2024-11-15 11:01:12.016777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:19:23.418 [2024-11-15 11:01:12.016798] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state
00:19:23.418 [2024-11-15 11:01:12.016822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:19:23.418 [2024-11-15 11:01:12.016851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:19:23.418 [2024-11-15 11:01:12.016872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:19:23.418 [2024-11-15 11:01:12.016891] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state
00:19:23.418 [2024-11-15 11:01:12.016912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:19:23.418 [2024-11-15 11:01:12.016947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:19:23.418 [2024-11-15 11:01:12.016956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:19:23.418 [2024-11-15 11:01:12.016964] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state
00:19:23.418 [2024-11-15 11:01:12.016973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:19:23.418 [2024-11-15 11:01:12.016985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:19:23.418 [2024-11-15 11:01:12.016994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:19:23.418 [2024-11-15 11:01:12.017003] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state
00:19:23.418 [2024-11-15 11:01:12.017012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:19:23.418 [2024-11-15 11:01:12.017119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:19:23.418 [2024-11-15 11:01:12.017131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:19:23.418 [2024-11-15 11:01:12.017141] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state
00:19:23.418 [2024-11-15 11:01:12.017150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:19:23.418 [2024-11-15 11:01:12.017169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:19:23.418 [2024-11-15 11:01:12.017178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:19:23.418 [2024-11-15 11:01:12.017187] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state
00:19:23.419 [2024-11-15 11:01:12.017196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:19:23.419 [2024-11-15 11:01:12.017208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:19:23.419 [2024-11-15 11:01:12.017217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:19:23.419 [2024-11-15 11:01:12.017225] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state
00:19:23.419 [2024-11-15 11:01:12.017234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:19:23.419 [2024-11-15 11:01:12.017250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:19:23.419 [2024-11-15 11:01:12.017259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:19:23.419 [2024-11-15 11:01:12.017267] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state
00:19:23.419 [2024-11-15 11:01:12.017277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:19:23.419 [2024-11-15 11:01:12.017288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:19:23.419 [2024-11-15 11:01:12.017297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:19:23.419 [2024-11-15 11:01:12.017306] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state
00:19:23.419 [2024-11-15 11:01:12.017315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1477546 ']'
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1477546
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1477546 ']'
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1477546
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1477546) - No such process
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1477546 is not found'
Process with pid 1477546 is not found
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:23.419
00:19:23.419 real 0m5.469s
00:19:23.419 user 0m16.014s
00:19:23.419 sys 0m1.147s
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:23.419 ************************************
00:19:23.419 END TEST nvmf_shutdown_tc3
00:19:23.419 ************************************
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]]
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable
00:19:23.419 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:23.678 ************************************
00:19:23.678 START TEST nvmf_shutdown_tc4
00:19:23.678 ************************************
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:23.678 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)'
Found 0000:af:00.0 (0x15b3 - 0x1017)
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)'
Found 0000:af:00.1 (0x15b3 - 0x1017)
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0'
Found net devices under 0000:af:00.0: mlx_0_0
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1'
Found net devices under 0000:af:00.1: mlx_0_1
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:19:23.679 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff
altname enp175s0f0np0
altname ens801f0np0
inet 192.168.100.8/24 scope global mlx_0_0
valid_lft forever preferred_lft forever
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff
altname enp175s0f1np1
altname ens801f1np1
inet 192.168.100.9/24 scope global mlx_0_1
valid_lft forever preferred_lft forever
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:19:23.680 192.168.100.9'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:19:23.680 192.168.100.9'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:19:23.680 192.168.100.9'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1478657
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1478657
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1478657 ']'
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100
00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:23.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.680 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:23.939 [2024-11-15 11:01:12.602628] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:19:23.939 [2024-11-15 11:01:12.602684] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.939 [2024-11-15 11:01:12.666763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.939 [2024-11-15 11:01:12.709001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.939 [2024-11-15 11:01:12.709040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.939 [2024-11-15 11:01:12.709048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.939 [2024-11-15 11:01:12.709053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.939 [2024-11-15 11:01:12.709058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.939 [2024-11-15 11:01:12.710762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.939 [2024-11-15 11:01:12.710831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.939 [2024-11-15 11:01:12.710932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.939 [2024-11-15 11:01:12.710932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.939 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:23.939 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:19:23.939 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.940 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.940 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:24.199 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.199 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:24.199 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.199 11:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:24.199 [2024-11-15 11:01:12.876459] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2261530/0x2265a20) succeed. 00:19:24.199 [2024-11-15 11:01:12.885782] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2262bc0/0x22a70c0) succeed. 
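Editor's note: the allocate_nic_ips trace above derives each interface's address with the same three-step pipeline every time. As a minimal standalone sketch (the function name matches the nvmf/common.sh trace; the wrapper itself is illustrative):

    # "ip -o -4 addr show <if>" prints one line per IPv4 address; field 4 is the
    # CIDR form (e.g. 192.168.100.8/24) and cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig

NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are then just "head -n 1" and "tail -n +2 | head -n 1" of the collected list, as the nvmf/common.sh@485/@486 entries above show.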
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:24.199 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:24.199 Malloc1
[2024-11-15 11:01:13.104491] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
Malloc2
00:19:24.458 Malloc3
00:19:24.458 Malloc4
00:19:24.458 Malloc5
00:19:24.458 Malloc6
00:19:24.717 Malloc7
00:19:24.717 Malloc8
00:19:24.717 Malloc9
00:19:24.717 Malloc10
00:19:24.717 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:24.717 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:19:24.717 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:24.717 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:24.717 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1478927
00:19:24.717 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:19:24.717 11:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:19:24.975 [2024-11-15 11:01:13.625276] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
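Editor's note: the shutdown.sh@28 loop above cats one here-doc per subsystem into rpcs.txt and shutdown.sh@36 then replays the whole file through rpc_cmd in one shot. xtrace does not capture the here-doc bodies, so the following single iteration is a hypothetical reconstruction using standard SPDK RPC names; the bdev size and serial number are illustrative, not taken from this run:

    # Hypothetical body of one loop iteration; only the RPC names are standard.
    cat >> rpcs.txt <<EOF
    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    EOF

The Malloc1 through Malloc10 lines and the single "Listening on 192.168.100.8 port 4420" notice above are consistent with ten such batches sharing one listener address. The spdk_nvme_perf invocation that follows drives 45056-byte random writes (-w randwrite -o 45056) at queue depth 128 (-q 128) for 20 seconds (-t 20) against that listener.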
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1478657
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1478657 ']'
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1478657
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1478657
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1478657'
00:19:30.349 killing process with pid 1478657
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 1478657
00:19:30.349 11:01:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 1478657
00:19:30.349 11:01:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:19:30.977 [2024-11-15 11:01:19.687368] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:19:30.977 [2024-11-15 11:01:19.687504] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:19:30.977 NVMe io qpair process completion error
00:19:30.977 [2024-11-15 11:01:19.690864] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:19:30.977 [2024-11-15 11:01:19.690909] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:19:30.977 NVMe io qpair process completion error
00:19:30.977 NVMe io qpair process completion error
00:19:30.977 NVMe io qpair process completion error
00:19:30.977 Write completed with error (sct=0, sc=8)
[the identical 'Write completed with error (sct=0, sc=8)' line repeats here, timestamped 00:19:30.977 through 00:19:30.980, once for every I/O still queued against the dying controllers; the long run is condensed, including a few more interleaved 'NVMe io qpair process completion error' lines]
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1478927
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1478927
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:31.547 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1478927
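Editor's note: shutdown_tc4 kills the target while perf still has most of its 20-second run left, so perf itself is expected to die; the NOT/valid_exec_arg trace above is autotest_common.sh's expect-failure wrapper inverting the exit status of wait. A simplified sketch of the idea (the real helper also validates its argument first, as the valid_exec_arg lines show):

    # Invert the sense of the wrapped command: non-zero exit becomes success.
    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0
    }
    NOT wait "$perfpid"   # passes here only because perf exits non-zero once the target is gone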
00:19:32.118 Write completed with error (sct=0, sc=8)
[identical 'Write completed with error (sct=0, sc=8)' lines condensed]
[2024-11-15 11:01:20.694212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-15 11:01:20.694278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
[more identical 'Write completed with error (sct=0, sc=8)' lines condensed]
[2024-11-15 11:01:20.696012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-15 11:01:20.696051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
[more identical 'Write completed with error (sct=0, sc=8)' lines condensed]
[2024-11-15 11:01:20.697530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-15 11:01:20.697566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
[more identical 'Write completed with error (sct=0, sc=8)' lines condensed]
[2024-11-15 11:01:20.704433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-11-15 11:01:20.704468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
[the remaining identical 'Write completed with error (sct=0, sc=8)' lines, timestamped 00:19:32.118 through 00:19:32.119, are condensed; the capture breaks off mid-run]
with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 [2024-11-15 11:01:20.712605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 [2024-11-15 11:01:20.712673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 
00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.119 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 [2024-11-15 11:01:20.722323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.120 [2024-11-15 11:01:20.722381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 [2024-11-15 11:01:20.723969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.120 Write completed with error (sct=0, sc=8) 
00:19:32.120 [2024-11-15 11:01:20.724007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 [2024-11-15 11:01:20.726021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.120 [2024-11-15 11:01:20.726055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.120 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 [2024-11-15 11:01:20.732039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 [2024-11-15 11:01:20.732095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error 
(sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 [2024-11-15 11:01:20.740730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:32.121 [2024-11-15 11:01:20.740785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error (sct=0, sc=8) 00:19:32.121 Write completed with error 
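An aside on decoding the statuses above, stated from the NVMe base specification rather than from this log: status code type 0 with status code 0x8 is "Command Aborted due to SQ Deletion", and the -6 in the CQ transport errors is errno ENXIO ("No such device or address"); both are expected here, since nvmf_shutdown_tc4 deletes the subsystems while writes are still in flight. As a minimal sketch for tallying the omitted completions from a saved copy of this console output (build.log is a hypothetical filename, not a file this job produces):

    # Count occurrences of the aborted-write completion message in a saved log.
    grep -o 'Write completed with error (sct=0, sc=8)' build.log | wc -l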
[... remaining identical "Write completed with error (sct=0, sc=8)" completions omitted ...]
00:19:32.122 Initializing NVMe Controllers
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:19:32.122 Controller IO queue size 128, less than required.
00:19:32.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:19:32.122 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:19:32.122 Initialization complete. Launching workers.
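The repeated "Controller IO queue size 128, less than required" advisories mean the perf tool requested a deeper queue than each controller exposes, so the excess I/O waits inside the NVMe driver. As a hedged illustration of the suggested remedy only: -q, -o, -w, -t and -r are standard spdk_nvme_perf options, but the values below and the single-subsystem transport ID are assumptions, not the parameters this job actually used.

    # Sketch: rerun the perf tool with a queue depth under the controller's 128 limit.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'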
00:19:32.122 ========================================================
00:19:32.122 Latency(us)
00:19:32.122 Device Information : IOPS MiB/s Average min max
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1443.57 62.03 89228.74 22909.25 1220878.79
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1452.22 62.40 88754.75 6430.39 1220418.27
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1478.18 63.52 101792.02 139.94 2196321.94
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1487.69 63.92 100497.30 130.53 2140882.86
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1458.50 62.67 102571.48 129.08 2168060.83
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1458.16 62.66 102653.24 181.38 2181603.81
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1422.02 61.10 89795.70 41239.03 1215079.96
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1423.03 61.15 89808.75 43335.82 1206690.67
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1487.52 63.92 100589.24 131.16 2122615.68
00:19:32.122 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1504.32 64.64 99531.57 117.57 2028786.41
00:19:32.122 ========================================================
00:19:32.122 Total : 14615.20 628.00 96598.55 117.57 2196321.94
00:19:32.122
00:19:32.122 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:32.122 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:19:32.122 rmmod nvme_rdma
00:19:32.122 rmmod nvme_fabrics
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1478657 ']'
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1478657
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1478657 ']'
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1478657
00:19:32.123 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1478657) - No such process
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1478657 is not found'
00:19:32.123 Process with pid 1478657 is not found
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:32.123
00:19:32.123 real 0m8.522s
00:19:32.123 user 0m32.237s
00:19:32.123 sys 0m1.065s
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:32.123 ************************************
00:19:32.123 END TEST nvmf_shutdown_tc4
00:19:32.123 ************************************
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:19:32.123
00:19:32.123 real 0m31.010s
00:19:32.123 user 1m36.413s
00:19:32.123 sys 0m8.573s
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:32.123 ************************************
00:19:32.123 END TEST nvmf_shutdown
00:19:32.123 ************************************
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:19:32.123 11:01:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
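The latency table printed above by spdk_nvme_perf can be cross-checked by summing the per-subsystem IOPS column (the fifth field from the end of each device row); the sum should land near the reported total of 14615.20. A small sketch, assuming the table has been saved to a file named latency.txt (a hypothetical name):

    # Sum the IOPS column over the ten device rows of the saved table.
    awk '/NSID 1 from core/ { iops += $(NF-4) } END { printf "total IOPS: %.2f\n", iops }' latency.txt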
************************************
00:19:32.123 START TEST nvmf_nsid
************************************
11:01:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:19:32.383 * Looking for test storage...
00:19:32.383 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:19:32.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.383 --rc genhtml_branch_coverage=1
00:19:32.383 --rc genhtml_function_coverage=1
00:19:32.383 --rc genhtml_legend=1
00:19:32.383 --rc geninfo_all_blocks=1
00:19:32.383 --rc geninfo_unexecuted_blocks=1
00:19:32.383
00:19:32.383 '
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:19:32.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.383 --rc genhtml_branch_coverage=1
00:19:32.383 --rc genhtml_function_coverage=1
00:19:32.383 --rc genhtml_legend=1
00:19:32.383 --rc geninfo_all_blocks=1
00:19:32.383 --rc geninfo_unexecuted_blocks=1
00:19:32.383
00:19:32.383 '
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:19:32.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.383 --rc genhtml_branch_coverage=1
00:19:32.383 --rc genhtml_function_coverage=1
00:19:32.383 --rc genhtml_legend=1
00:19:32.383 --rc geninfo_all_blocks=1
00:19:32.383 --rc geninfo_unexecuted_blocks=1
00:19:32.383
00:19:32.383 '
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:19:32.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.383 --rc genhtml_branch_coverage=1
00:19:32.383 --rc genhtml_function_coverage=1
00:19:32.383 --rc genhtml_legend=1
00:19:32.383 --rc geninfo_all_blocks=1
00:19:32.383 --rc geninfo_unexecuted_blocks=1
00:19:32.383
00:19:32.383 '
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.383 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:32.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable
00:19:32.384 11:01:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=()
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=()
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=()
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=()
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=()
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)'
00:19:37.653 Found 0000:af:00.0 (0x15b3 - 0x1017)
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)'
00:19:37.653 Found 0000:af:00.1 (0x15b3 - 0x1017)
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]]
00:19:37.653 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme
connect -i 15' 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:19:37.654 Found net devices under 0000:af:00.0: mlx_0_0 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:19:37.654 Found net devices under 0000:af:00.1: mlx_0_1 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:37.654 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.654 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:19:37.654 
altname enp175s0f0np0 00:19:37.654 altname ens801f0np0 00:19:37.654 inet 192.168.100.8/24 scope global mlx_0_0 00:19:37.654 valid_lft forever preferred_lft forever 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:37.654 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:37.654 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:19:37.654 altname enp175s0f1np1 00:19:37.654 altname ens801f1np1 00:19:37.654 inet 192.168.100.9/24 scope global mlx_0_1 00:19:37.654 valid_lft forever preferred_lft forever 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:37.654 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.913 11:01:26 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:37.913 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:37.914 192.168.100.9' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:37.914 192.168.100.9' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:37.914 192.168.100.9' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1483229 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1483229 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1483229 ']' 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.914 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.914 [2024-11-15 11:01:26.670142] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:19:37.914 [2024-11-15 11:01:26.670197] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.914 [2024-11-15 11:01:26.731080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.914 [2024-11-15 11:01:26.769341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.914 [2024-11-15 11:01:26.769373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.914 [2024-11-15 11:01:26.769380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.914 [2024-11-15 11:01:26.769385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.914 [2024-11-15 11:01:26.769390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
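
The trace above launches nvmf_tgt (-i 0 -e 0xFFFF -m 1, pid 1483229) and then parks in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock accepts commands. A minimal sketch of that wait pattern, assuming a plain poll loop; the real waitforlisten in autotest_common.sh does more bookkeeping, and wait_for_rpc_socket here is a hypothetical stand-in, not the helper's actual name:

    # Hypothetical stand-in for waitforlisten: poll until the SPDK app's
    # UNIX-domain RPC socket exists, bailing out if the process dies.
    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do              # mirrors "local max_retries=100" in the trace
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            [[ -S $rpc_addr ]] && return 0           # socket is up, RPC can proceed
            sleep 0.1
        done
        return 1
    }
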
00:19:37.914 [2024-11-15 11:01:26.769964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1483253 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d6cb9e5c-2381-4b01-b516-e88f071aa0f4 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e638c32d-cb02-4d7d-adea-785c83cbde80 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2312861d-9937-4c29-b200-20e09a7300d5 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:38.174 11:01:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.174 11:01:26 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:38.174 null0 00:19:38.174 null1 00:19:38.174 [2024-11-15 11:01:26.959226] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:19:38.174 [2024-11-15 11:01:26.959268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483253 ] 00:19:38.174 null2 00:19:38.174 [2024-11-15 11:01:26.986935] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b24ac0/0x1b352c0) succeed. 00:19:38.174 [2024-11-15 11:01:26.995855] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b25f70/0x1bb5300) succeed. 00:19:38.174 [2024-11-15 11:01:27.022351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.174 [2024-11-15 11:01:27.046955] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:38.433 [2024-11-15 11:01:27.068011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1483253 /var/tmp/tgt2.sock 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1483253 ']' 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:19:38.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:38.433 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:39.000 [2024-11-15 11:01:27.610091] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc9740/0xd391b0) succeed. 00:19:39.000 [2024-11-15 11:01:27.620534] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf259f0/0xd7a850) succeed. 
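
The three uuidgen values above (ns1uuid, ns2uuid, ns3uuid) are what the nsid checks below compare against the NGUIDs the controller reports per namespace. Half of the conversion is visible in the trace, where nvmf/common.sh@787 strips the dashes with "tr -d -"; the comparison itself is done in uppercase. A sketch of that normalization, under the assumption that the uppercasing happens alongside the dash removal:

    # uuid2nguid as suggested by the trace: drop dashes and uppercase, so a
    # uuidgen string can be compared with `nvme id-ns -o json | jq -r .nguid`.
    uuid2nguid() {
        local uuid=$1
        tr -d - <<< "${uuid^^}"
    }

    uuid2nguid d6cb9e5c-2381-4b01-b516-e88f071aa0f4
    # -> D6CB9E5C23814B01B516E88F071AA0F4
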
00:19:39.000 [2024-11-15 11:01:27.663118] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:39.000 nvme0n1 nvme0n2 00:19:39.000 nvme1n1 00:19:39.000 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:39.000 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:39.000 11:01:27 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d6cb9e5c-2381-4b01-b516-e88f071aa0f4 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d6cb9e5c23814b01b516e88f071aa0f4 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D6CB9E5C23814B01B516E88F071AA0F4 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D6CB9E5C23814B01B516E88F071AA0F4 == \D\6\C\B\9\E\5\C\2\3\8\1\4\B\0\1\B\5\1\6\E\8\8\F\0\7\1\A\A\0\F\4 ]] 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:00.927 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:20:00.928 11:01:47 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e638c32d-cb02-4d7d-adea-785c83cbde80 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e638c32dcb024d7dadea785c83cbde80 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E638C32DCB024D7DADEA785C83CBDE80 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E638C32DCB024D7DADEA785C83CBDE80 == \E\6\3\8\C\3\2\D\C\B\0\2\4\D\7\D\A\D\E\A\7\8\5\C\8\3\C\B\D\E\8\0 ]] 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2312861d-9937-4c29-b200-20e09a7300d5 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2312861d99374c29b20020e09a7300d5 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2312861D99374C29B20020E09A7300D5 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2312861D99374C29B20020E09A7300D5 == 
\2\3\1\2\8\6\1\D\9\9\3\7\4\C\2\9\B\2\0\0\2\0\E\0\9\A\7\3\0\0\D\5 ]] 00:20:00.928 11:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:09.034 [2024-11-15 11:01:57.806699] ctrlr.c: 180:nvmf_ctrlr_keep_alive_poll: *NOTICE*: Disconnecting host nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 from subsystem nqn.2024-10.io.spdk:cnode2 due to keep alive timeout. 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1483253 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1483253 ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1483253 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1483253 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1483253' 00:21:16.718 killing process with pid 1483253 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1483253 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1483253 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:16.718 rmmod nvme_rdma 00:21:16.718 rmmod nvme_fabrics 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1483229 ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1483229 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1483229 ']' 00:21:16.718 11:03:02 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1483229 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1483229 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1483229' 00:21:16.718 killing process with pid 1483229 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1483229 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1483229 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:16.718 00:21:16.718 real 1m42.028s 00:21:16.718 user 3m12.970s 00:21:16.718 sys 0m5.886s 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.718 11:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:16.718 ************************************ 00:21:16.718 END TEST nvmf_nsid 00:21:16.718 ************************************ 00:21:16.718 11:03:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:16.718 00:21:16.718 real 10m12.858s 00:21:16.718 user 26m2.717s 00:21:16.718 sys 1m54.390s 00:21:16.718 11:03:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.718 11:03:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.718 ************************************ 00:21:16.718 END TEST nvmf_target_extra 00:21:16.718 ************************************ 00:21:16.718 11:03:03 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:21:16.718 11:03:03 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:16.718 11:03:03 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.718 11:03:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:16.718 ************************************ 00:21:16.718 START TEST nvmf_host 00:21:16.718 ************************************ 00:21:16.718 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:21:16.718 * Looking for test storage... 
00:21:16.718 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.719 --rc genhtml_branch_coverage=1 00:21:16.719 --rc genhtml_function_coverage=1 00:21:16.719 --rc genhtml_legend=1 00:21:16.719 --rc geninfo_all_blocks=1 00:21:16.719 --rc geninfo_unexecuted_blocks=1 00:21:16.719 00:21:16.719 ' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:21:16.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.719 --rc genhtml_branch_coverage=1 00:21:16.719 --rc genhtml_function_coverage=1 00:21:16.719 --rc genhtml_legend=1 00:21:16.719 --rc geninfo_all_blocks=1 00:21:16.719 --rc geninfo_unexecuted_blocks=1 00:21:16.719 00:21:16.719 ' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.719 --rc genhtml_branch_coverage=1 00:21:16.719 --rc genhtml_function_coverage=1 00:21:16.719 --rc genhtml_legend=1 00:21:16.719 --rc geninfo_all_blocks=1 00:21:16.719 --rc geninfo_unexecuted_blocks=1 00:21:16.719 00:21:16.719 ' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.719 --rc genhtml_branch_coverage=1 00:21:16.719 --rc genhtml_function_coverage=1 00:21:16.719 --rc genhtml_legend=1 00:21:16.719 --rc geninfo_all_blocks=1 00:21:16.719 --rc geninfo_unexecuted_blocks=1 00:21:16.719 00:21:16.719 ' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.719 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.719 ************************************ 00:21:16.719 START TEST nvmf_multicontroller 00:21:16.719 ************************************ 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:16.719 * Looking for test storage... 00:21:16.719 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.719 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.720 --rc genhtml_branch_coverage=1 00:21:16.720 --rc genhtml_function_coverage=1 00:21:16.720 --rc genhtml_legend=1 00:21:16.720 --rc geninfo_all_blocks=1 00:21:16.720 --rc geninfo_unexecuted_blocks=1 00:21:16.720 00:21:16.720 ' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.720 --rc genhtml_branch_coverage=1 00:21:16.720 --rc genhtml_function_coverage=1 00:21:16.720 --rc genhtml_legend=1 00:21:16.720 --rc geninfo_all_blocks=1 00:21:16.720 --rc geninfo_unexecuted_blocks=1 00:21:16.720 00:21:16.720 ' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.720 --rc genhtml_branch_coverage=1 00:21:16.720 --rc genhtml_function_coverage=1 00:21:16.720 --rc genhtml_legend=1 00:21:16.720 --rc geninfo_all_blocks=1 00:21:16.720 --rc geninfo_unexecuted_blocks=1 00:21:16.720 00:21:16.720 ' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.720 --rc genhtml_branch_coverage=1 00:21:16.720 --rc genhtml_function_coverage=1 00:21:16.720 --rc genhtml_legend=1 00:21:16.720 --rc geninfo_all_blocks=1 00:21:16.720 --rc geninfo_unexecuted_blocks=1 00:21:16.720 00:21:16.720 ' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
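The xtrace run above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2 before choosing coverage flags. A minimal standalone sketch of that component-wise comparison, assuming plain dot-separated version strings; this is an illustration, not SPDK's exact lt/cmp_versions helpers:

    # Hedged reconstruction: split each version on ".-:" and compare the
    # numeric fields left to right, treating missing fields as zero.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option spelling"

Here lt 1.15 2 succeeds, which is why the trace goes on to export the older "--rc lcov_branch_coverage=1" style LCOV_OPTS.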
00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.720 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.720 11:03:03 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:21:16.720 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:16.721 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:21:16.721 00:21:16.721 real 0m0.199s 00:21:16.721 user 0m0.132s 00:21:16.721 sys 0m0.082s 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.721 ************************************ 00:21:16.721 END TEST nvmf_multicontroller 00:21:16.721 ************************************ 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.721 ************************************ 00:21:16.721 START TEST nvmf_aer 00:21:16.721 ************************************ 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:16.721 * Looking for test storage... 
00:21:16.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.721 --rc genhtml_branch_coverage=1 00:21:16.721 --rc genhtml_function_coverage=1 00:21:16.721 --rc genhtml_legend=1 00:21:16.721 --rc geninfo_all_blocks=1 00:21:16.721 --rc geninfo_unexecuted_blocks=1 00:21:16.721 00:21:16.721 ' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.721 --rc genhtml_branch_coverage=1 00:21:16.721 --rc genhtml_function_coverage=1 00:21:16.721 --rc genhtml_legend=1 00:21:16.721 --rc geninfo_all_blocks=1 00:21:16.721 --rc geninfo_unexecuted_blocks=1 00:21:16.721 00:21:16.721 ' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.721 --rc genhtml_branch_coverage=1 00:21:16.721 --rc genhtml_function_coverage=1 00:21:16.721 --rc genhtml_legend=1 00:21:16.721 --rc geninfo_all_blocks=1 00:21:16.721 --rc geninfo_unexecuted_blocks=1 00:21:16.721 00:21:16.721 ' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.721 --rc genhtml_branch_coverage=1 00:21:16.721 --rc genhtml_function_coverage=1 00:21:16.721 --rc genhtml_legend=1 00:21:16.721 --rc geninfo_all_blocks=1 00:21:16.721 --rc geninfo_unexecuted_blocks=1 00:21:16.721 00:21:16.721 ' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.721 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.722 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.722 11:03:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:20.913 11:03:08 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:21:20.913 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:21:20.913 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:21:20.913 Found net devices under 0000:af:00.0: mlx_0_0 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:21:20.913 Found net devices under 0000:af:00.1: mlx_0_1 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
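The discovery pass above builds a PCI allow-list from the known e810/x722/mlx device IDs and then maps each matching PCI function to its kernel net interface through sysfs, producing the "Found 0000:af:00.x" and "Found net devices" lines. A simplified sketch of that walk; hedged, since the real nvmf/common.sh filters on the full device-ID table and also handles unknown or unbound drivers:

    # Keep Mellanox (vendor 0x15b3) functions and print their netdevs.
    shopt -s nullglob
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x15b3 ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            echo "  net device: ${net##*/}"    # e.g. mlx_0_0, mlx_0_1
        done
    done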
00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:20.913 11:03:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:20.913 
11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:20.913 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:20.914 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.914 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:21:20.914 altname enp175s0f0np0 00:21:20.914 altname ens801f0np0 00:21:20.914 inet 192.168.100.8/24 scope global mlx_0_0 00:21:20.914 valid_lft forever preferred_lft forever 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:20.914 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.914 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:21:20.914 altname enp175s0f1np1 00:21:20.914 altname ens801f1np1 00:21:20.914 inet 192.168.100.9/24 scope global mlx_0_1 00:21:20.914 valid_lft forever preferred_lft forever 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
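allocate_nic_ips above resolves each RDMA interface to its IPv4 address by parsing "ip -o -4 addr show"; the 192.168.100.8 and 192.168.100.9 values printed for mlx_0_0/mlx_0_1 come straight from that pipeline. The helper, reconstructed from the common.sh@116-117 trace as a standalone function (the wrapper loop below is illustrative):

    # Field 4 of `ip -o -4 addr show <if>` is ADDR/PREFIX; drop the prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # Usage matching the log (expects 192.168.100.8 and 192.168.100.9):
    for nic in mlx_0_0 mlx_0_1; do
        echo "$nic -> $(get_ip_address "$nic")"
    done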
00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:20.914 192.168.100.9' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:20.914 192.168.100.9' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:20.914 192.168.100.9' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:20.914 
11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1493076 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1493076 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 1493076 ']' 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 [2024-11-15 11:03:09.223345] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:21:20.914 [2024-11-15 11:03:09.223391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.914 [2024-11-15 11:03:09.285511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.914 [2024-11-15 11:03:09.330901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.914 [2024-11-15 11:03:09.330937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.914 [2024-11-15 11:03:09.330944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.914 [2024-11-15 11:03:09.330950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.914 [2024-11-15 11:03:09.330955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
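nvmfappstart above boots the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 1493076) and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A hedged sketch of that launch-and-poll pattern; the rpc.py probe loop is an assumption for illustration, not the harness's literal waitforlisten implementation:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path per the log
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target services requests (max ~10 s).
    for (( i = 0; i < 100; i++ )); do
        "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null && break
        sleep 0.1
    done
    echo "nvmf_tgt listening, pid $nvmfpid"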
00:21:20.914 [2024-11-15 11:03:09.332617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.914 [2024-11-15 11:03:09.332716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.914 [2024-11-15 11:03:09.332735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.914 [2024-11-15 11:03:09.332737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 [2024-11-15 11:03:09.503252] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22c4230/0x22c8720) succeed. 00:21:20.914 [2024-11-15 11:03:09.512609] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22c58c0/0x2309dc0) succeed. 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 Malloc0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.914 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.915 [2024-11-15 
11:03:09.696754] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:20.915 [ 00:21:20.915 { 00:21:20.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:20.915 "subtype": "Discovery", 00:21:20.915 "listen_addresses": [], 00:21:20.915 "allow_any_host": true, 00:21:20.915 "hosts": [] 00:21:20.915 }, 00:21:20.915 { 00:21:20.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.915 "subtype": "NVMe", 00:21:20.915 "listen_addresses": [ 00:21:20.915 { 00:21:20.915 "trtype": "RDMA", 00:21:20.915 "adrfam": "IPv4", 00:21:20.915 "traddr": "192.168.100.8", 00:21:20.915 "trsvcid": "4420" 00:21:20.915 } 00:21:20.915 ], 00:21:20.915 "allow_any_host": true, 00:21:20.915 "hosts": [], 00:21:20.915 "serial_number": "SPDK00000000000001", 00:21:20.915 "model_number": "SPDK bdev Controller", 00:21:20.915 "max_namespaces": 2, 00:21:20.915 "min_cntlid": 1, 00:21:20.915 "max_cntlid": 65519, 00:21:20.915 "namespaces": [ 00:21:20.915 { 00:21:20.915 "nsid": 1, 00:21:20.915 "bdev_name": "Malloc0", 00:21:20.915 "name": "Malloc0", 00:21:20.915 "nguid": "370E81F1DC0242F9B28B1A9A95BDB1D4", 00:21:20.915 "uuid": "370e81f1-dc02-42f9-b28b-1a9a95bdb1d4" 00:21:20.915 } 00:21:20.915 ] 00:21:20.915 } 00:21:20.915 ] 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1493349 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:21:20.915 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.174 Malloc1 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.174 [ 00:21:21.174 { 00:21:21.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:21.174 "subtype": "Discovery", 00:21:21.174 "listen_addresses": [], 00:21:21.174 "allow_any_host": true, 00:21:21.174 "hosts": [] 00:21:21.174 }, 00:21:21.174 { 00:21:21.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.174 "subtype": "NVMe", 00:21:21.174 "listen_addresses": [ 00:21:21.174 { 00:21:21.174 "trtype": "RDMA", 00:21:21.174 "adrfam": "IPv4", 00:21:21.174 "traddr": "192.168.100.8", 00:21:21.174 "trsvcid": "4420" 00:21:21.174 } 00:21:21.174 ], 00:21:21.174 "allow_any_host": true, 00:21:21.174 "hosts": [], 00:21:21.174 "serial_number": "SPDK00000000000001", 00:21:21.174 "model_number": "SPDK bdev Controller", 00:21:21.174 "max_namespaces": 2, 00:21:21.174 "min_cntlid": 1, 00:21:21.174 "max_cntlid": 65519, 00:21:21.174 "namespaces": [ 00:21:21.174 { 00:21:21.174 "nsid": 1, 00:21:21.174 "bdev_name": "Malloc0", 00:21:21.174 "name": "Malloc0", 00:21:21.174 "nguid": "370E81F1DC0242F9B28B1A9A95BDB1D4", 00:21:21.174 "uuid": "370e81f1-dc02-42f9-b28b-1a9a95bdb1d4" 00:21:21.174 }, 00:21:21.174 { 00:21:21.174 "nsid": 2, 00:21:21.174 "bdev_name": "Malloc1", 00:21:21.174 "name": "Malloc1", 00:21:21.174 "nguid": "DC699F1A1A5F4D6D81CAC447E9A337F6", 00:21:21.174 "uuid": "dc699f1a-1a5f-4d6d-81ca-c447e9a337f6" 00:21:21.174 } 00:21:21.174 ] 00:21:21.174 } 00:21:21.174 ] 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.174 11:03:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1493349 00:21:21.174 Asynchronous Event Request test 00:21:21.174 Attaching to 192.168.100.8 00:21:21.174 Attached to 192.168.100.8 00:21:21.174 Registering asynchronous event callbacks... 00:21:21.174 Starting namespace attribute notice tests for all controllers... 00:21:21.174 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:21.174 aer_cb - Changed Namespace 00:21:21.174 Cleaning up... 
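The aer tool (pid 1493349) stays attached to cnode1 while the script hot-adds a second namespace, and it is that nvmf_subsystem_add_ns call which fires the "aer_cb - Changed Namespace" notice above. The RPC sequence, condensed from the trace; the commands and arguments are the ones traced, while the direct rpc.py invocation style stands in for the harness's rpc_cmd wrapper:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"        # socket path per the log
    $rpc bdev_malloc_create 64 4096 --name Malloc1    # 64 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    $rpc nvmf_get_subsystems   # now reports nsid 1 (Malloc0) and nsid 2 (Malloc1)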
00:21:21.174 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:21.174 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.174 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.174 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.174 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:21.433 rmmod nvme_rdma 00:21:21.433 rmmod nvme_fabrics 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1493076 ']' 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1493076 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 1493076 ']' 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 1493076 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1493076 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1493076' 00:21:21.433 killing process 
with pid 1493076 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 1493076 00:21:21.433 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 1493076 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:21.692 00:21:21.692 real 0m6.852s 00:21:21.692 user 0m5.861s 00:21:21.692 sys 0m4.500s 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:21.692 ************************************ 00:21:21.692 END TEST nvmf_aer 00:21:21.692 ************************************ 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.692 ************************************ 00:21:21.692 START TEST nvmf_async_init 00:21:21.692 ************************************ 00:21:21.692 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:21.692 * Looking for test storage... 00:21:21.952 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:21.952 
11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:21.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.952 --rc genhtml_branch_coverage=1 00:21:21.952 --rc genhtml_function_coverage=1 00:21:21.952 --rc genhtml_legend=1 00:21:21.952 --rc geninfo_all_blocks=1 00:21:21.952 --rc geninfo_unexecuted_blocks=1 00:21:21.952 00:21:21.952 ' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:21.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.952 --rc genhtml_branch_coverage=1 00:21:21.952 --rc genhtml_function_coverage=1 00:21:21.952 --rc genhtml_legend=1 00:21:21.952 --rc geninfo_all_blocks=1 00:21:21.952 --rc geninfo_unexecuted_blocks=1 00:21:21.952 00:21:21.952 ' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:21.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.952 --rc genhtml_branch_coverage=1 00:21:21.952 --rc genhtml_function_coverage=1 00:21:21.952 --rc genhtml_legend=1 00:21:21.952 --rc geninfo_all_blocks=1 00:21:21.952 --rc geninfo_unexecuted_blocks=1 00:21:21.952 00:21:21.952 ' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:21.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.952 --rc genhtml_branch_coverage=1 00:21:21.952 --rc genhtml_function_coverage=1 00:21:21.952 --rc genhtml_legend=1 00:21:21.952 --rc geninfo_all_blocks=1 00:21:21.952 --rc geninfo_unexecuted_blocks=1 00:21:21.952 00:21:21.952 ' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
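The lt/cmp_versions walk traced above is a plain field-by-field dotted-version compare: the installed lcov reports 1.15, which is older than 2, so the harness keeps the pre-2.0 option spellings (--rc lcov_branch_coverage=1 rather than the newer branch_coverage names). A hedged equivalent of that check using sort -V; version_lt is a stand-in name, not the helper's real one:

# Sketch only: true when $1 is strictly older than $2 (dotted versions).
version_lt() {
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
ver=$(lcov --version | awk '{print $NF}')   # 1.15 in this run
if version_lt "$ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi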
00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.952 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.953 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
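The "[: : integer expression expected" message above is bash itself complaining: common.sh line 33 evaluates '[' '' -eq 1 ']' with an unset variable, and -eq requires integer operands, so the test fails with that diagnostic and the script simply falls through. The usual guard is to default the expansion before comparing; a sketch with an illustrative variable name (SOME_FLAG is not the variable common.sh actually tests):

# SOME_FLAG stands in for whatever flag common.sh line 33 checks; it is
# unset in this run, which is exactly what triggered the message above.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "optional feature enabled"
fi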
00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=665f9ccf1d4a485aa531321e66eac60c 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.953 11:03:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:21:27.222 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:21:27.222 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:21:27.223 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:21:27.223 Found net devices under 0000:af:00.0: mlx_0_0 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:21:27.223 Found net devices under 0000:af:00.1: mlx_0_1 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:27.223 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:27.223 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:21:27.223 altname enp175s0f0np0 00:21:27.223 altname ens801f0np0 00:21:27.223 inet 
192.168.100.8/24 scope global mlx_0_0 00:21:27.223 valid_lft forever preferred_lft forever 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:27.223 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:27.223 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:21:27.223 altname enp175s0f1np1 00:21:27.223 altname ens801f1np1 00:21:27.223 inet 192.168.100.9/24 scope global mlx_0_1 00:21:27.223 valid_lft forever preferred_lft forever 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:27.223 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:27.224 192.168.100.9' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:27.224 192.168.100.9' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:27.224 192.168.100.9' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1496545 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1496545 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 1496545 ']' 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:27.224 11:03:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 [2024-11-15 11:03:15.993315] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:21:27.224 [2024-11-15 11:03:15.993362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.224 [2024-11-15 11:03:16.056286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.224 [2024-11-15 11:03:16.097624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.224 [2024-11-15 11:03:16.097660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.224 [2024-11-15 11:03:16.097667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.224 [2024-11-15 11:03:16.097673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.224 [2024-11-15 11:03:16.097678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
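nvmfappstart above launches the target pinned to core 0 (-m 0x1), with every tracepoint group enabled (-e 0xFFFF, hence the spdk_trace notices) and shared-memory id 0 (-i 0), then waitforlisten blocks until the RPC socket answers before any rpc_cmd runs. A minimal sketch of that start-and-wait pattern; the polling loop is illustrative, the real helper lives in autotest_common.sh:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll the default RPC socket until the app is up and accepting commands
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done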
00:21:27.224 [2024-11-15 11:03:16.098302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 [2024-11-15 11:03:16.252628] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf96fd0/0xf9b4c0) succeed. 00:21:27.483 [2024-11-15 11:03:16.261397] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf98480/0xfdcb60) succeed. 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 null0 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 665f9ccf1d4a485aa531321e66eac60c 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 [2024-11-15 11:03:16.326907] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.483 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.742 nvme0n1 00:21:27.742 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.742 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:27.742 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.742 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.742 [ 00:21:27.742 { 00:21:27.742 "name": "nvme0n1", 00:21:27.742 "aliases": [ 00:21:27.742 "665f9ccf-1d4a-485a-a531-321e66eac60c" 00:21:27.742 ], 00:21:27.742 "product_name": "NVMe disk", 00:21:27.742 "block_size": 512, 00:21:27.742 "num_blocks": 2097152, 00:21:27.743 "uuid": "665f9ccf-1d4a-485a-a531-321e66eac60c", 00:21:27.743 "numa_id": 1, 00:21:27.743 "assigned_rate_limits": { 00:21:27.743 "rw_ios_per_sec": 0, 00:21:27.743 "rw_mbytes_per_sec": 0, 00:21:27.743 "r_mbytes_per_sec": 0, 00:21:27.743 "w_mbytes_per_sec": 0 00:21:27.743 }, 00:21:27.743 "claimed": false, 00:21:27.743 "zoned": false, 00:21:27.743 "supported_io_types": { 00:21:27.743 "read": true, 00:21:27.743 "write": true, 00:21:27.743 "unmap": false, 00:21:27.743 "flush": true, 00:21:27.743 "reset": true, 00:21:27.743 "nvme_admin": true, 00:21:27.743 "nvme_io": true, 00:21:27.743 "nvme_io_md": false, 00:21:27.743 "write_zeroes": true, 00:21:27.743 "zcopy": false, 00:21:27.743 "get_zone_info": false, 00:21:27.743 "zone_management": false, 00:21:27.743 "zone_append": false, 00:21:27.743 "compare": true, 00:21:27.743 "compare_and_write": true, 00:21:27.743 "abort": true, 00:21:27.743 "seek_hole": false, 00:21:27.743 "seek_data": false, 00:21:27.743 "copy": true, 00:21:27.743 "nvme_iov_md": false 00:21:27.743 }, 00:21:27.743 "memory_domains": [ 00:21:27.743 { 00:21:27.743 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:27.743 "dma_device_type": 0 00:21:27.743 } 00:21:27.743 ], 00:21:27.743 "driver_specific": { 00:21:27.743 "nvme": [ 00:21:27.743 { 00:21:27.743 "trid": { 00:21:27.743 "trtype": "RDMA", 00:21:27.743 "adrfam": "IPv4", 00:21:27.743 "traddr": "192.168.100.8", 00:21:27.743 "trsvcid": "4420", 00:21:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:27.743 }, 00:21:27.743 "ctrlr_data": { 00:21:27.743 "cntlid": 1, 00:21:27.743 "vendor_id": "0x8086", 00:21:27.743 "model_number": "SPDK bdev Controller", 00:21:27.743 "serial_number": "00000000000000000000", 00:21:27.743 "firmware_revision": "25.01", 00:21:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:27.743 "oacs": { 00:21:27.743 "security": 0, 
00:21:27.743 "format": 0, 00:21:27.743 "firmware": 0, 00:21:27.743 "ns_manage": 0 00:21:27.743 }, 00:21:27.743 "multi_ctrlr": true, 00:21:27.743 "ana_reporting": false 00:21:27.743 }, 00:21:27.743 "vs": { 00:21:27.743 "nvme_version": "1.3" 00:21:27.743 }, 00:21:27.743 "ns_data": { 00:21:27.743 "id": 1, 00:21:27.743 "can_share": true 00:21:27.743 } 00:21:27.743 } 00:21:27.743 ], 00:21:27.743 "mp_policy": "active_passive" 00:21:27.743 } 00:21:27.743 } 00:21:27.743 ] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 [2024-11-15 11:03:16.431448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:27.743 [2024-11-15 11:03:16.454587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:21:27.743 [2024-11-15 11:03:16.479945] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 [ 00:21:27.743 { 00:21:27.743 "name": "nvme0n1", 00:21:27.743 "aliases": [ 00:21:27.743 "665f9ccf-1d4a-485a-a531-321e66eac60c" 00:21:27.743 ], 00:21:27.743 "product_name": "NVMe disk", 00:21:27.743 "block_size": 512, 00:21:27.743 "num_blocks": 2097152, 00:21:27.743 "uuid": "665f9ccf-1d4a-485a-a531-321e66eac60c", 00:21:27.743 "numa_id": 1, 00:21:27.743 "assigned_rate_limits": { 00:21:27.743 "rw_ios_per_sec": 0, 00:21:27.743 "rw_mbytes_per_sec": 0, 00:21:27.743 "r_mbytes_per_sec": 0, 00:21:27.743 "w_mbytes_per_sec": 0 00:21:27.743 }, 00:21:27.743 "claimed": false, 00:21:27.743 "zoned": false, 00:21:27.743 "supported_io_types": { 00:21:27.743 "read": true, 00:21:27.743 "write": true, 00:21:27.743 "unmap": false, 00:21:27.743 "flush": true, 00:21:27.743 "reset": true, 00:21:27.743 "nvme_admin": true, 00:21:27.743 "nvme_io": true, 00:21:27.743 "nvme_io_md": false, 00:21:27.743 "write_zeroes": true, 00:21:27.743 "zcopy": false, 00:21:27.743 "get_zone_info": false, 00:21:27.743 "zone_management": false, 00:21:27.743 "zone_append": false, 00:21:27.743 "compare": true, 00:21:27.743 "compare_and_write": true, 00:21:27.743 "abort": true, 00:21:27.743 "seek_hole": false, 00:21:27.743 "seek_data": false, 00:21:27.743 "copy": true, 00:21:27.743 "nvme_iov_md": false 00:21:27.743 }, 00:21:27.743 "memory_domains": [ 00:21:27.743 { 00:21:27.743 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:27.743 "dma_device_type": 0 00:21:27.743 } 00:21:27.743 ], 00:21:27.743 "driver_specific": { 00:21:27.743 "nvme": [ 00:21:27.743 { 00:21:27.743 "trid": { 00:21:27.743 "trtype": "RDMA", 00:21:27.743 "adrfam": "IPv4", 00:21:27.743 "traddr": "192.168.100.8", 
00:21:27.743 "trsvcid": "4420", 00:21:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:27.743 }, 00:21:27.743 "ctrlr_data": { 00:21:27.743 "cntlid": 2, 00:21:27.743 "vendor_id": "0x8086", 00:21:27.743 "model_number": "SPDK bdev Controller", 00:21:27.743 "serial_number": "00000000000000000000", 00:21:27.743 "firmware_revision": "25.01", 00:21:27.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:27.743 "oacs": { 00:21:27.743 "security": 0, 00:21:27.743 "format": 0, 00:21:27.743 "firmware": 0, 00:21:27.743 "ns_manage": 0 00:21:27.743 }, 00:21:27.743 "multi_ctrlr": true, 00:21:27.743 "ana_reporting": false 00:21:27.743 }, 00:21:27.743 "vs": { 00:21:27.743 "nvme_version": "1.3" 00:21:27.743 }, 00:21:27.743 "ns_data": { 00:21:27.743 "id": 1, 00:21:27.743 "can_share": true 00:21:27.743 } 00:21:27.743 } 00:21:27.743 ], 00:21:27.743 "mp_policy": "active_passive" 00:21:27.743 } 00:21:27.743 } 00:21:27.743 ] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LrTBDkpvGp 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LrTBDkpvGp 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.LrTBDkpvGp 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 [2024-11-15 11:03:16.546696] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.743 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:27.743 [2024-11-15 11:03:16.562738] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.001 nvme0n1 00:21:28.001 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.001 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:28.001 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.001 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:28.001 [ 00:21:28.001 { 00:21:28.001 "name": "nvme0n1", 00:21:28.001 "aliases": [ 00:21:28.001 "665f9ccf-1d4a-485a-a531-321e66eac60c" 00:21:28.001 ], 00:21:28.001 "product_name": "NVMe disk", 00:21:28.001 "block_size": 512, 00:21:28.001 "num_blocks": 2097152, 00:21:28.001 "uuid": "665f9ccf-1d4a-485a-a531-321e66eac60c", 00:21:28.001 "numa_id": 1, 00:21:28.001 "assigned_rate_limits": { 00:21:28.001 "rw_ios_per_sec": 0, 00:21:28.001 "rw_mbytes_per_sec": 0, 00:21:28.001 "r_mbytes_per_sec": 0, 00:21:28.001 "w_mbytes_per_sec": 0 00:21:28.001 }, 00:21:28.001 "claimed": false, 00:21:28.001 "zoned": false, 00:21:28.001 "supported_io_types": { 00:21:28.001 "read": true, 00:21:28.001 "write": true, 00:21:28.001 "unmap": false, 00:21:28.001 "flush": true, 00:21:28.001 "reset": true, 00:21:28.001 "nvme_admin": true, 00:21:28.001 "nvme_io": true, 00:21:28.001 "nvme_io_md": false, 00:21:28.001 "write_zeroes": true, 00:21:28.001 "zcopy": false, 00:21:28.001 "get_zone_info": false, 00:21:28.002 "zone_management": false, 00:21:28.002 "zone_append": false, 00:21:28.002 "compare": true, 00:21:28.002 "compare_and_write": true, 00:21:28.002 "abort": true, 00:21:28.002 "seek_hole": false, 00:21:28.002 "seek_data": false, 00:21:28.002 "copy": true, 00:21:28.002 "nvme_iov_md": false 00:21:28.002 }, 00:21:28.002 "memory_domains": [ 00:21:28.002 { 00:21:28.002 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:28.002 "dma_device_type": 0 00:21:28.002 } 00:21:28.002 ], 00:21:28.002 "driver_specific": { 00:21:28.002 "nvme": [ 00:21:28.002 { 00:21:28.002 "trid": { 00:21:28.002 "trtype": "RDMA", 00:21:28.002 "adrfam": "IPv4", 00:21:28.002 "traddr": "192.168.100.8", 00:21:28.002 "trsvcid": "4421", 00:21:28.002 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:28.002 }, 00:21:28.002 "ctrlr_data": { 00:21:28.002 "cntlid": 3, 00:21:28.002 "vendor_id": "0x8086", 00:21:28.002 "model_number": "SPDK bdev Controller", 00:21:28.002 
"serial_number": "00000000000000000000", 00:21:28.002 "firmware_revision": "25.01", 00:21:28.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:28.002 "oacs": { 00:21:28.002 "security": 0, 00:21:28.002 "format": 0, 00:21:28.002 "firmware": 0, 00:21:28.002 "ns_manage": 0 00:21:28.002 }, 00:21:28.002 "multi_ctrlr": true, 00:21:28.002 "ana_reporting": false 00:21:28.002 }, 00:21:28.002 "vs": { 00:21:28.002 "nvme_version": "1.3" 00:21:28.002 }, 00:21:28.002 "ns_data": { 00:21:28.002 "id": 1, 00:21:28.002 "can_share": true 00:21:28.002 } 00:21:28.002 } 00:21:28.002 ], 00:21:28.002 "mp_policy": "active_passive" 00:21:28.002 } 00:21:28.002 } 00:21:28.002 ] 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.LrTBDkpvGp 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:28.002 rmmod nvme_rdma 00:21:28.002 rmmod nvme_fabrics 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1496545 ']' 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1496545 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 1496545 ']' 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 1496545 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1496545 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:28.002 11:03:16 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1496545' 00:21:28.002 killing process with pid 1496545 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 1496545 00:21:28.002 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 1496545 00:21:28.260 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:28.260 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:28.260 00:21:28.260 real 0m6.475s 00:21:28.260 user 0m2.621s 00:21:28.260 sys 0m4.283s 00:21:28.260 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:28.260 11:03:16 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:28.260 ************************************ 00:21:28.260 END TEST nvmf_async_init 00:21:28.260 ************************************ 00:21:28.261 11:03:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:28.261 11:03:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:28.261 11:03:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:28.261 11:03:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.261 ************************************ 00:21:28.261 START TEST dma 00:21:28.261 ************************************ 00:21:28.261 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:28.261 * Looking for test storage... 
00:21:28.261 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:28.261 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:28.261 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:21:28.261 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.519 --rc genhtml_branch_coverage=1 00:21:28.519 --rc genhtml_function_coverage=1 00:21:28.519 --rc genhtml_legend=1 00:21:28.519 --rc geninfo_all_blocks=1 00:21:28.519 --rc geninfo_unexecuted_blocks=1 00:21:28.519 00:21:28.519 ' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.519 --rc genhtml_branch_coverage=1 00:21:28.519 --rc genhtml_function_coverage=1 00:21:28.519 --rc genhtml_legend=1 00:21:28.519 --rc geninfo_all_blocks=1 00:21:28.519 --rc geninfo_unexecuted_blocks=1 00:21:28.519 00:21:28.519 ' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.519 --rc genhtml_branch_coverage=1 00:21:28.519 --rc genhtml_function_coverage=1 00:21:28.519 --rc genhtml_legend=1 00:21:28.519 --rc geninfo_all_blocks=1 00:21:28.519 --rc geninfo_unexecuted_blocks=1 00:21:28.519 00:21:28.519 ' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.519 --rc genhtml_branch_coverage=1 00:21:28.519 --rc genhtml_function_coverage=1 00:21:28.519 --rc genhtml_legend=1 00:21:28.519 --rc geninfo_all_blocks=1 00:21:28.519 --rc geninfo_unexecuted_blocks=1 00:21:28.519 00:21:28.519 ' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.519 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
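The pair of records above is the harness muting xtrace for a single command: bash's 'set -x' output is routed to file descriptor 15 via BASH_XTRACEFD, and the command is eval'ed with that descriptor redirected to /dev/null. A minimal sketch of the same pattern (the FD number is taken from the trace; SPDK's actual helper may differ in detail):

    exec 15>&2                              # keep FD 15 open, mirroring stderr
    BASH_XTRACEFD=15                        # bash now writes xtrace output to FD 15
    set -x
    xtrace_disable_per_cmd() {
        # run one command with FD 15 pointed at /dev/null, silencing its trace
        eval "$* 15> /dev/null"
    }
    noisy() { echo hello; }                 # stand-in for _remove_spdk_ns
    xtrace_disable_per_cmd noisy            # 'hello' prints, the function's trace does not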
00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.519 11:03:17 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:21:33.791 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:21:33.791 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:21:33.791 Found net devices under 0000:af:00.0: mlx_0_0 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:21:33.791 Found net devices under 0000:af:00.1: mlx_0_1 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
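gather_supported_nvmf_pci_devs resolves each matching mlx5 PCI function to its kernel netdev by globbing the device's sysfs node, as traced above. The lookup, reconstructed as a standalone loop (PCI addresses hard-coded to the two ports the log found):

    for pci in 0000:af:00.0 0000:af:00.1; do
        # the kernel exposes the bound netdev under the PCI function's sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the path, keeping e.g. mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done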
00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:33.791 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address 
mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:33.792 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:33.792 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:21:33.792 altname enp175s0f0np0 00:21:33.792 altname ens801f0np0 00:21:33.792 inet 192.168.100.8/24 scope global mlx_0_0 00:21:33.792 valid_lft forever preferred_lft forever 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:33.792 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:33.792 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:21:33.792 altname enp175s0f1np1 00:21:33.792 altname ens801f1np1 00:21:33.792 inet 192.168.100.9/24 scope global mlx_0_1 00:21:33.792 valid_lft forever preferred_lft forever 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 
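get_ip_address, as traced above, reduces one record of 'ip -o -4 addr show' output to a bare IPv4 address. Reconstructed from the trace:

    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is addr/prefix, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this host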
00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:33.792 192.168.100.9' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:33.792 192.168.100.9' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:33.792 192.168.100.9' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
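RDMA_IP_LIST holds the RDMA-capable addresses, one per line; the first entry becomes NVMF_FIRST_TARGET_IP and the second, if any, NVMF_SECOND_TARGET_IP. The parsing, reconstructed from the head/tail calls in the trace:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
    [ -z "$NVMF_FIRST_TARGET_IP" ] && { echo 'no RDMA-capable IP found' >&2; exit 1; }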
00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=1499778 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 1499778 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@833 -- # '[' -z 1499778 ']' 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:33.792 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:33.792 [2024-11-15 11:03:22.527227] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:21:33.792 [2024-11-15 11:03:22.527273] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.792 [2024-11-15 11:03:22.589522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:33.792 [2024-11-15 11:03:22.631320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.792 [2024-11-15 11:03:22.631357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.792 [2024-11-15 11:03:22.631364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.792 [2024-11-15 11:03:22.631370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.792 [2024-11-15 11:03:22.631375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
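At this point nvmfappstart has launched nvmf_tgt in the background (nvmfpid=1499778) and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough standalone equivalent; the polling loop is an illustrative stand-in for SPDK's waitforlisten helper, not its actual code:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &    # shm id 0, all trace groups, cores 0-1
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target responds
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1          # bail out if the target died
        sleep 0.5
    done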
00:21:33.792 [2024-11-15 11:03:22.632604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.793 [2024-11-15 11:03:22.632608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@866 -- # return 0 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:34.052 [2024-11-15 11:03:22.790625] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2333b50/0x2338040) succeed. 00:21:34.052 [2024-11-15 11:03:22.800156] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23350a0/0x23796e0) succeed. 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:34.052 Malloc0 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.052 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:34.311 [2024-11-15 11:03:22.957033] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:21:34.311 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:21:34.312 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:34.312 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:34.312 { 00:21:34.312 "params": { 00:21:34.312 "name": "Nvme$subsystem", 00:21:34.312 "trtype": "$TEST_TRANSPORT", 00:21:34.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.312 "adrfam": "ipv4", 00:21:34.312 "trsvcid": "$NVMF_PORT", 00:21:34.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.312 "hdgst": ${hdgst:-false}, 00:21:34.312 "ddgst": ${ddgst:-false} 00:21:34.312 }, 00:21:34.312 "method": "bdev_nvme_attach_controller" 00:21:34.312 } 00:21:34.312 EOF 00:21:34.312 )") 00:21:34.312 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:21:34.312 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:21:34.312 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:21:34.312 11:03:22 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:34.312 "params": { 00:21:34.312 "name": "Nvme0", 00:21:34.312 "trtype": "rdma", 00:21:34.312 "traddr": "192.168.100.8", 00:21:34.312 "adrfam": "ipv4", 00:21:34.312 "trsvcid": "4420", 00:21:34.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:34.312 "hdgst": false, 00:21:34.312 "ddgst": false 00:21:34.312 }, 00:21:34.312 "method": "bdev_nvme_attach_controller" 00:21:34.312 }' 00:21:34.312 [2024-11-15 11:03:23.008022] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:21:34.312 [2024-11-15 11:03:23.008063] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499917 ]
00:21:34.312 [2024-11-15 11:03:23.066255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:34.312 [2024-11-15 11:03:23.108021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:34.312 [2024-11-15 11:03:23.108025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:39.590 bdev Nvme0n1 reports 1 memory domains
00:21:39.590 bdev Nvme0n1 supports RDMA memory domain
00:21:39.590 Initialization complete, running randrw IO for 5 sec on 2 cores
00:21:39.590 ==========================================================================
00:21:39.590 Latency [us]
00:21:39.590 IOPS MiB/s Average min max
00:21:39.590 Core 2: 20712.39 80.91 771.84 264.09 8748.86
00:21:39.590 Core 3: 20659.60 80.70 773.76 258.95 8896.85
00:21:39.590 ==========================================================================
00:21:39.590 Total : 41371.99 161.61 772.80 258.95 8896.85
00:21:39.590
00:21:39.590 Total operations: 206894, translate 206894 pull_push 0 memzero 0
00:21:39.848 11:03:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:21:39.848 11:03:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json
00:21:39.848 11:03:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq .
00:21:39.848 [2024-11-15 11:03:28.527411] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:21:39.848 [2024-11-15 11:03:28.527458] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500844 ]
00:21:39.848 [2024-11-15 11:03:28.585563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:39.848 [2024-11-15 11:03:28.627112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:39.848 [2024-11-15 11:03:28.627116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:45.115 bdev Malloc0 reports 2 memory domains
00:21:45.115 bdev Malloc0 doesn't support RDMA memory domain
00:21:45.115 Initialization complete, running randrw IO for 5 sec on 2 cores
00:21:45.115 ==========================================================================
00:21:45.115 Latency [us]
00:21:45.115 IOPS MiB/s Average min max
00:21:45.115 Core 2: 13737.36 53.66 1163.97 445.64 1492.41
00:21:45.115 Core 3: 13744.96 53.69 1163.32 455.96 1894.91
00:21:45.115 ==========================================================================
00:21:45.115 Total : 27482.32 107.35 1163.65 445.64 1894.91
00:21:45.115
00:21:45.115 Total operations: 137464, translate 0 pull_push 549856 memzero 0
00:21:45.115 11:03:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:21:45.115 11:03:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:21:45.115 11:03:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:21:45.115 11:03:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:21:45.115 Ignoring -M option
00:21:45.115 [2024-11-15 11:03:33.950206] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:21:45.115 [2024-11-15 11:03:33.950256] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501669 ]
00:21:45.373 [2024-11-15 11:03:34.010325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:45.373 [2024-11-15 11:03:34.050011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:45.373 [2024-11-15 11:03:34.050014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:50.640 bdev 8ad0d5e2-7379-4d71-9cc8-3685782166fa reports 1 memory domains
00:21:50.640 bdev 8ad0d5e2-7379-4d71-9cc8-3685782166fa supports RDMA memory domain
00:21:50.640 Initialization complete, running randread IO for 5 sec on 2 cores
00:21:50.640 ==========================================================================
00:21:50.640 Latency [us]
00:21:50.640 IOPS MiB/s Average min max
00:21:50.640 Core 2: 72576.79 283.50 219.69 84.76 3722.10
00:21:50.640 Core 3: 69944.11 273.22 227.94 80.68 3635.97
00:21:50.640 ==========================================================================
00:21:50.640 Total : 142520.90 556.72 223.74 80.68 3722.10
00:21:50.640
00:21:50.640 Total operations: 712691, translate 0 pull_push 0 memzero 712691
00:21:50.640 11:03:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:21:50.899 [2024-11-15 11:03:39.595226] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:53.569 Initializing NVMe Controllers
00:21:53.569 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:21:53.569 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:21:53.569 Initialization complete. Launching workers.
00:21:53.569 ========================================================
00:21:53.569 Latency(us)
00:21:53.569 Device Information : IOPS MiB/s Average min max
00:21:53.569 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.69 7.91 7957.77 6126.81 8186.45
00:21:53.569 ========================================================
00:21:53.569 Total : 2024.69 7.91 7957.77 6126.81 8186.45
00:21:53.569
00:21:53.569 11:03:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:21:53.569 11:03:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:21:53.569 11:03:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:21:53.569 11:03:41 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:21:53.569 [2024-11-15 11:03:41.952446] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:21:53.569 [2024-11-15 11:03:41.952495] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502949 ] 00:21:53.569 [2024-11-15 11:03:42.012745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:53.569 [2024-11-15 11:03:42.054099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.569 [2024-11-15 11:03:42.054102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.837 bdev 80585e42-933c-490d-80f1-496296291711 reports 1 memory domains 00:21:58.837 bdev 80585e42-933c-490d-80f1-496296291711 supports RDMA memory domain 00:21:58.837 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:58.837 ========================================================================== 00:21:58.837 Latency [us] 00:21:58.837 IOPS MiB/s Average min max 00:21:58.837 Core 2: 18093.89 70.68 883.54 51.26 13487.67 00:21:58.837 Core 3: 18278.27 71.40 874.64 19.68 13217.86 00:21:58.837 ========================================================================== 00:21:58.837 Total : 36372.16 142.08 879.07 19.68 13487.67 00:21:58.837 00:21:58.837 Total operations: 181882, translate 181781 pull_push 0 memzero 101 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:58.837 rmmod nvme_rdma 00:21:58.837 rmmod nvme_fabrics 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 1499778 ']' 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 1499778 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@952 -- # '[' -z 1499778 ']' 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # kill -0 1499778 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # uname 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1499778 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1499778' 00:21:58.837 killing 
process with pid 1499778 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@971 -- # kill 1499778 00:21:58.837 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@976 -- # wait 1499778 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:59.096 00:21:59.096 real 0m30.839s 00:21:59.096 user 1m34.657s 00:21:59.096 sys 0m4.929s 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:59.096 ************************************ 00:21:59.096 END TEST dma 00:21:59.096 ************************************ 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.096 ************************************ 00:21:59.096 START TEST nvmf_identify 00:21:59.096 ************************************ 00:21:59.096 11:03:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:59.355 * Looking for test storage... 00:21:59.355 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:59.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.355 --rc genhtml_branch_coverage=1 00:21:59.355 --rc genhtml_function_coverage=1 00:21:59.355 --rc genhtml_legend=1 00:21:59.355 --rc geninfo_all_blocks=1 00:21:59.355 --rc geninfo_unexecuted_blocks=1 00:21:59.355 00:21:59.355 ' 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:59.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.355 --rc genhtml_branch_coverage=1 00:21:59.355 --rc genhtml_function_coverage=1 00:21:59.355 --rc genhtml_legend=1 00:21:59.355 --rc geninfo_all_blocks=1 00:21:59.355 --rc geninfo_unexecuted_blocks=1 00:21:59.355 00:21:59.355 ' 00:21:59.355 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:59.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.356 --rc genhtml_branch_coverage=1 00:21:59.356 --rc genhtml_function_coverage=1 00:21:59.356 --rc genhtml_legend=1 00:21:59.356 --rc geninfo_all_blocks=1 00:21:59.356 --rc geninfo_unexecuted_blocks=1 00:21:59.356 00:21:59.356 ' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:59.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.356 --rc genhtml_branch_coverage=1 00:21:59.356 --rc genhtml_function_coverage=1 00:21:59.356 --rc genhtml_legend=1 00:21:59.356 --rc geninfo_all_blocks=1 00:21:59.356 --rc geninfo_unexecuted_blocks=1 00:21:59.356 00:21:59.356 ' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:59.356 11:03:48 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.356 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:59.356 11:03:48 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.356 11:03:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.622 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.623 11:03:53 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:22:04.623 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:22:04.623 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:22:04.623 Found net devices under 0000:af:00.0: mlx_0_0 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:22:04.623 Found net devices under 0000:af:00.1: mlx_0_1 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # 
rxe_cfg rxe-net 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:04.623 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:04.623 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:22:04.623 altname enp175s0f0np0 00:22:04.623 altname ens801f0np0 00:22:04.623 inet 192.168.100.8/24 scope global mlx_0_0 00:22:04.623 valid_lft forever preferred_lft forever 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:04.623 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:04.623 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:22:04.623 altname enp175s0f1np1 00:22:04.623 altname ens801f1np1 00:22:04.623 inet 192.168.100.9/24 scope global mlx_0_1 00:22:04.623 valid_lft forever preferred_lft forever 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:04.623 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:04.624 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:04.883 11:03:53 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:04.883 192.168.100.9' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:04.883 192.168.100.9' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:04.883 192.168.100.9' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1506984 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1506984 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 1506984 ']' 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.883 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:04.884 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:04.884 [2024-11-15 11:03:53.615115] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:22:04.884 [2024-11-15 11:03:53.615158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.884 [2024-11-15 11:03:53.677635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.884 [2024-11-15 11:03:53.723861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.884 [2024-11-15 11:03:53.723893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.884 [2024-11-15 11:03:53.723900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.884 [2024-11-15 11:03:53.723907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.884 [2024-11-15 11:03:53.723912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.884 [2024-11-15 11:03:53.725390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.884 [2024-11-15 11:03:53.725488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.884 [2024-11-15 11:03:53.725585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.884 [2024-11-15 11:03:53.725587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.143 [2024-11-15 11:03:53.846925] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1050230/0x1054720) succeed. 00:22:05.143 [2024-11-15 11:03:53.856159] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10518c0/0x1095dc0) succeed. 
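The nvmf_create_transport call above, and the bdev/subsystem/namespace/listener calls traced below, all run through the test framework's rpc_cmd wrapper, which forwards its arguments to scripts/rpc.py against the target's RPC socket (default /var/tmp/spdk.sock). A minimal standalone sketch of the same setup, reusing this run's workspace path and test values:

    # sketch: the rpc.py equivalents of the rpc_cmd calls in this trace
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420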
00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.143 11:03:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.143 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:05.143 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.143 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 Malloc0 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 [2024-11-15 11:03:54.075356] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 [ 00:22:05.406 { 00:22:05.406 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:05.406 "subtype": "Discovery", 00:22:05.406 "listen_addresses": [ 00:22:05.406 { 00:22:05.406 "trtype": "RDMA", 
00:22:05.406 "adrfam": "IPv4", 00:22:05.406 "traddr": "192.168.100.8", 00:22:05.406 "trsvcid": "4420" 00:22:05.406 } 00:22:05.406 ], 00:22:05.406 "allow_any_host": true, 00:22:05.406 "hosts": [] 00:22:05.406 }, 00:22:05.406 { 00:22:05.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.406 "subtype": "NVMe", 00:22:05.406 "listen_addresses": [ 00:22:05.406 { 00:22:05.406 "trtype": "RDMA", 00:22:05.406 "adrfam": "IPv4", 00:22:05.406 "traddr": "192.168.100.8", 00:22:05.406 "trsvcid": "4420" 00:22:05.406 } 00:22:05.406 ], 00:22:05.406 "allow_any_host": true, 00:22:05.406 "hosts": [], 00:22:05.406 "serial_number": "SPDK00000000000001", 00:22:05.406 "model_number": "SPDK bdev Controller", 00:22:05.406 "max_namespaces": 32, 00:22:05.406 "min_cntlid": 1, 00:22:05.406 "max_cntlid": 65519, 00:22:05.406 "namespaces": [ 00:22:05.406 { 00:22:05.406 "nsid": 1, 00:22:05.406 "bdev_name": "Malloc0", 00:22:05.406 "name": "Malloc0", 00:22:05.406 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:05.406 "eui64": "ABCDEF0123456789", 00:22:05.406 "uuid": "69a14679-9d8e-41d3-bc15-0e53f4699852" 00:22:05.406 } 00:22:05.406 ] 00:22:05.406 } 00:22:05.406 ] 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.406 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:05.406 [2024-11-15 11:03:54.129367] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:22:05.406 [2024-11-15 11:03:54.129415] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507225 ] 00:22:05.406 [2024-11-15 11:03:54.190472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:05.406 [2024-11-15 11:03:54.190553] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:22:05.406 [2024-11-15 11:03:54.190569] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:22:05.406 [2024-11-15 11:03:54.190573] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:22:05.406 [2024-11-15 11:03:54.190608] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:05.406 [2024-11-15 11:03:54.206798] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:22:05.406 [2024-11-15 11:03:54.221656] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:05.406 [2024-11-15 11:03:54.221665] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:22:05.406 [2024-11-15 11:03:54.221672] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x189200 00:22:05.406 [2024-11-15 11:03:54.221677] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221682] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221687] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221691] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221696] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221700] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221705] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221709] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221714] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221718] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221723] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221727] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221732] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221736] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221740] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221745] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221752] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221757] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221761] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221765] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221770] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221774] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 
11:03:54.221779] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221783] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221788] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221792] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221797] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221801] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221806] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221810] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221814] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:22:05.407 [2024-11-15 11:03:54.221819] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:05.407 [2024-11-15 11:03:54.221822] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:22:05.407 [2024-11-15 11:03:54.221837] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.221849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x189200 00:22:05.407 [2024-11-15 11:03:54.227168] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.407 [2024-11-15 11:03:54.227177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:05.407 [2024-11-15 11:03:54.227184] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227189] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:05.407 [2024-11-15 11:03:54.227196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:05.407 [2024-11-15 11:03:54.227201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:05.407 [2024-11-15 11:03:54.227213] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.407 [2024-11-15 11:03:54.227246] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.407 [2024-11-15 11:03:54.227251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:22:05.407 [2024-11-15 11:03:54.227256] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:05.407 [2024-11-15 11:03:54.227263] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:05.407 [2024-11-15 11:03:54.227274] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.407 [2024-11-15 11:03:54.227301] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.407 [2024-11-15 11:03:54.227306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:22:05.407 [2024-11-15 11:03:54.227311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:05.407 [2024-11-15 11:03:54.227315] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:05.407 [2024-11-15 11:03:54.227326] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.407 [2024-11-15 11:03:54.227357] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.407 [2024-11-15 11:03:54.227361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:05.407 [2024-11-15 11:03:54.227366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:05.407 [2024-11-15 11:03:54.227371] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227377] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.407 [2024-11-15 11:03:54.227402] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.407 [2024-11-15 11:03:54.227406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:05.407 [2024-11-15 11:03:54.227411] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:05.407 [2024-11-15 11:03:54.227415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:05.407 [2024-11-15 11:03:54.227420] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x189200 00:22:05.407 [2024-11-15 
11:03:54.227425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:05.407 [2024-11-15 11:03:54.227533] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:05.407 [2024-11-15 11:03:54.227537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:05.407 [2024-11-15 11:03:54.227545] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.407 [2024-11-15 11:03:54.227551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.407 [2024-11-15 11:03:54.227570] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.408 [2024-11-15 11:03:54.227575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:05.408 [2024-11-15 11:03:54.227579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:05.408 [2024-11-15 11:03:54.227584] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227590] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.408 [2024-11-15 11:03:54.227612] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.408 [2024-11-15 11:03:54.227616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:05.408 [2024-11-15 11:03:54.227621] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:05.408 [2024-11-15 11:03:54.227625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:05.408 [2024-11-15 11:03:54.227629] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227634] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:05.408 [2024-11-15 11:03:54.227641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:05.408 [2024-11-15 11:03:54.227650] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x189200 00:22:05.408 [2024-11-15 11:03:54.227690] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:22:05.408 [2024-11-15 11:03:54.227695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:05.408 [2024-11-15 11:03:54.227702] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:05.408 [2024-11-15 11:03:54.227707] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:05.408 [2024-11-15 11:03:54.227711] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:05.408 [2024-11-15 11:03:54.227715] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:05.408 [2024-11-15 11:03:54.227720] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:05.408 [2024-11-15 11:03:54.227724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:05.408 [2024-11-15 11:03:54.227728] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:05.408 [2024-11-15 11:03:54.227743] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.408 [2024-11-15 11:03:54.227769] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.408 [2024-11-15 11:03:54.227773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.408 [2024-11-15 11:03:54.227781] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.408 [2024-11-15 11:03:54.227792] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.408 [2024-11-15 11:03:54.227802] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.408 [2024-11-15 11:03:54.227813] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.408 [2024-11-15 11:03:54.227822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:05.408 [2024-11-15 11:03:54.227826] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:05.408 [2024-11-15 11:03:54.227841] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.408 [2024-11-15 11:03:54.227870] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.408 [2024-11-15 11:03:54.227874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:22:05.408 [2024-11-15 11:03:54.227879] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:05.408 [2024-11-15 11:03:54.227883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:05.408 [2024-11-15 11:03:54.227888] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227895] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x189200 00:22:05.408 [2024-11-15 11:03:54.227925] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.408 [2024-11-15 11:03:54.227929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:05.408 [2024-11-15 11:03:54.227935] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227942] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:05.408 [2024-11-15 11:03:54.227966] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x189200 00:22:05.408 [2024-11-15 11:03:54.227981] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x189200 00:22:05.408 [2024-11-15 11:03:54.227986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.408 [2024-11-15 11:03:54.228004] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.408 [2024-11-15 11:03:54.228009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
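The admin-queue bring-up above finishes with IDENTIFY and discovery GET LOG PAGE commands, and the controller summary they produce is printed below. The kernel initiator gives a comparable view of the same discovery service through nvme-cli — a sketch reusing this run's listener address:

    # requires nvme-cli and the nvme-rdma module loaded earlier in this trace
    nvme discover -t rdma -a 192.168.100.8 -s 4420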
00:22:05.408 [2024-11-15 11:03:54.228018] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x189200
00:22:05.408 [2024-11-15 11:03:54.228024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x189200
00:22:05.408 [2024-11-15 11:03:54.228029] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x189200
00:22:05.408 [2024-11-15 11:03:54.228033] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.408 [2024-11-15 11:03:54.228037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:05.408 [2024-11-15 11:03:54.228042] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x189200
00:22:05.408 [2024-11-15 11:03:54.228059] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.408 [2024-11-15 11:03:54.228063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:22:05.408 [2024-11-15 11:03:54.228071] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x189200
00:22:05.408 [2024-11-15 11:03:54.228077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x189200
00:22:05.408 [2024-11-15 11:03:54.228081] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x189200
00:22:05.408 [2024-11-15 11:03:54.228103] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.408 [2024-11-15 11:03:54.228107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:22:05.408 [2024-11-15 11:03:54.228116] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x189200
00:22:05.408 =====================================================
00:22:05.408 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:05.408 =====================================================
00:22:05.408 Controller Capabilities/Features
00:22:05.409 ================================
00:22:05.409 Vendor ID: 0000
00:22:05.409 Subsystem Vendor ID: 0000
00:22:05.409 Serial Number: ....................
00:22:05.409 Model Number: ........................................
00:22:05.409 Firmware Version: 25.01
00:22:05.409 Recommended Arb Burst: 0
00:22:05.409 IEEE OUI Identifier: 00 00 00
00:22:05.409 Multi-path I/O
00:22:05.409 May have multiple subsystem ports: No
00:22:05.409 May have multiple controllers: No
00:22:05.409 Associated with SR-IOV VF: No
00:22:05.409 Max Data Transfer Size: 131072
00:22:05.409 Max Number of Namespaces: 0
00:22:05.409 Max Number of I/O Queues: 1024
00:22:05.409 NVMe Specification Version (VS): 1.3
00:22:05.409 NVMe Specification Version (Identify): 1.3
00:22:05.409 Maximum Queue Entries: 128
00:22:05.409 Contiguous Queues Required: Yes
00:22:05.409 Arbitration Mechanisms Supported
00:22:05.409 Weighted Round Robin: Not Supported
00:22:05.409 Vendor Specific: Not Supported
00:22:05.409 Reset Timeout: 15000 ms
00:22:05.409 Doorbell Stride: 4 bytes
00:22:05.409 NVM Subsystem Reset: Not Supported
00:22:05.409 Command Sets Supported
00:22:05.409 NVM Command Set: Supported
00:22:05.409 Boot Partition: Not Supported
00:22:05.409 Memory Page Size Minimum: 4096 bytes
00:22:05.409 Memory Page Size Maximum: 4096 bytes
00:22:05.409 Persistent Memory Region: Not Supported
00:22:05.409 Optional Asynchronous Events Supported
00:22:05.409 Namespace Attribute Notices: Not Supported
00:22:05.409 Firmware Activation Notices: Not Supported
00:22:05.409 ANA Change Notices: Not Supported
00:22:05.409 PLE Aggregate Log Change Notices: Not Supported
00:22:05.409 LBA Status Info Alert Notices: Not Supported
00:22:05.409 EGE Aggregate Log Change Notices: Not Supported
00:22:05.409 Normal NVM Subsystem Shutdown event: Not Supported
00:22:05.409 Zone Descriptor Change Notices: Not Supported
00:22:05.409 Discovery Log Change Notices: Supported
00:22:05.409 Controller Attributes
00:22:05.409 128-bit Host Identifier: Not Supported
00:22:05.409 Non-Operational Permissive Mode: Not Supported
00:22:05.409 NVM Sets: Not Supported
00:22:05.409 Read Recovery Levels: Not Supported
00:22:05.409 Endurance Groups: Not Supported
00:22:05.409 Predictable Latency Mode: Not Supported
00:22:05.409 Traffic Based Keep ALive: Not Supported
00:22:05.409 Namespace Granularity: Not Supported
00:22:05.409 SQ Associations: Not Supported
00:22:05.409 UUID List: Not Supported
00:22:05.409 Multi-Domain Subsystem: Not Supported
00:22:05.409 Fixed Capacity Management: Not Supported
00:22:05.409 Variable Capacity Management: Not Supported
00:22:05.409 Delete Endurance Group: Not Supported
00:22:05.409 Delete NVM Set: Not Supported
00:22:05.409 Extended LBA Formats Supported: Not Supported
00:22:05.409 Flexible Data Placement Supported: Not Supported
00:22:05.409
00:22:05.409 Controller Memory Buffer Support
00:22:05.409 ================================
00:22:05.409 Supported: No
00:22:05.409
00:22:05.409 Persistent Memory Region Support
00:22:05.409 ================================
00:22:05.409 Supported: No
00:22:05.409
00:22:05.409 Admin Command Set Attributes
00:22:05.409 ============================
00:22:05.409 Security Send/Receive: Not Supported
00:22:05.409 Format NVM: Not Supported
00:22:05.409 Firmware Activate/Download: Not Supported
00:22:05.409 Namespace Management: Not Supported
00:22:05.409 Device Self-Test: Not Supported
00:22:05.409 Directives: Not Supported
00:22:05.409 NVMe-MI: Not Supported
00:22:05.409 Virtualization Management: Not Supported
00:22:05.409 Doorbell Buffer Config: Not Supported
00:22:05.409 Get LBA Status Capability: Not Supported
00:22:05.409 Command & Feature Lockdown Capability: Not Supported
00:22:05.409 Abort Command Limit: 1
00:22:05.409 Async Event Request Limit: 4
00:22:05.409 Number of Firmware Slots: N/A
00:22:05.409 Firmware Slot 1 Read-Only: N/A
00:22:05.409 Firmware Activation Without Reset: N/A
00:22:05.409 Multiple Update Detection Support: N/A
00:22:05.409 Firmware Update Granularity: No Information Provided
00:22:05.409 Per-Namespace SMART Log: No
00:22:05.409 Asymmetric Namespace Access Log Page: Not Supported
00:22:05.409 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:05.409 Command Effects Log Page: Not Supported
00:22:05.409 Get Log Page Extended Data: Supported
00:22:05.409 Telemetry Log Pages: Not Supported
00:22:05.409 Persistent Event Log Pages: Not Supported
00:22:05.409 Supported Log Pages Log Page: May Support
00:22:05.409 Commands Supported & Effects Log Page: Not Supported
00:22:05.409 Feature Identifiers & Effects Log Page:May Support
00:22:05.409 NVMe-MI Commands & Effects Log Page: May Support
00:22:05.409 Data Area 4 for Telemetry Log: Not Supported
00:22:05.409 Error Log Page Entries Supported: 128
00:22:05.409 Keep Alive: Not Supported
00:22:05.409
00:22:05.409 NVM Command Set Attributes
00:22:05.409 ==========================
00:22:05.409 Submission Queue Entry Size
00:22:05.409 Max: 1
00:22:05.409 Min: 1
00:22:05.409 Completion Queue Entry Size
00:22:05.409 Max: 1
00:22:05.409 Min: 1
00:22:05.409 Number of Namespaces: 0
00:22:05.409 Compare Command: Not Supported
00:22:05.409 Write Uncorrectable Command: Not Supported
00:22:05.409 Dataset Management Command: Not Supported
00:22:05.409 Write Zeroes Command: Not Supported
00:22:05.409 Set Features Save Field: Not Supported
00:22:05.409 Reservations: Not Supported
00:22:05.409 Timestamp: Not Supported
00:22:05.409 Copy: Not Supported
00:22:05.409 Volatile Write Cache: Not Present
00:22:05.409 Atomic Write Unit (Normal): 1
00:22:05.409 Atomic Write Unit (PFail): 1
00:22:05.409 Atomic Compare & Write Unit: 1
00:22:05.409 Fused Compare & Write: Supported
00:22:05.409 Scatter-Gather List
00:22:05.409 SGL Command Set: Supported
00:22:05.409 SGL Keyed: Supported
00:22:05.409 SGL Bit Bucket Descriptor: Not Supported
00:22:05.409 SGL Metadata Pointer: Not Supported
00:22:05.409 Oversized SGL: Not Supported
00:22:05.409 SGL Metadata Address: Not Supported
00:22:05.409 SGL Offset: Supported
00:22:05.409 Transport SGL Data Block: Not Supported
00:22:05.409 Replay Protected Memory Block: Not Supported
00:22:05.409
00:22:05.409 Firmware Slot Information
00:22:05.409 =========================
00:22:05.409 Active slot: 0
00:22:05.409
00:22:05.409
00:22:05.409 Error Log
00:22:05.409 =========
00:22:05.409
00:22:05.409 Active Namespaces
00:22:05.409 =================
00:22:05.409 Discovery Log Page
00:22:05.409 ==================
00:22:05.409 Generation Counter: 2
00:22:05.409 Number of Records: 2
00:22:05.409 Record Format: 0
00:22:05.409
00:22:05.409 Discovery Log Entry 0
00:22:05.409 ----------------------
00:22:05.409 Transport Type: 1 (RDMA)
00:22:05.409 Address Family: 1 (IPv4)
00:22:05.409 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:05.409 Entry Flags:
00:22:05.409 Duplicate Returned Information: 1
00:22:05.409 Explicit Persistent Connection Support for Discovery: 1
00:22:05.409 Transport Requirements:
00:22:05.409 Secure Channel: Not Required
00:22:05.409 Port ID: 0 (0x0000)
00:22:05.409 Controller ID: 65535 (0xffff)
00:22:05.409 Admin Max SQ Size: 128
00:22:05.409 Transport Service Identifier: 4420
00:22:05.409 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:05.409 Transport Address: 192.168.100.8
00:22:05.409 Transport Specific Address Subtype - RDMA
00:22:05.409 RDMA QP Service Type: 1 (Reliable Connected)
00:22:05.410 RDMA Provider Type: 1 (No provider specified)
00:22:05.410 RDMA CM Service: 1 (RDMA_CM)
00:22:05.410 Discovery Log Entry 1
00:22:05.410 ----------------------
00:22:05.410 Transport Type: 1 (RDMA)
00:22:05.410 Address Family: 1 (IPv4)
00:22:05.410 Subsystem Type: 2 (NVM Subsystem)
00:22:05.410 Entry Flags:
00:22:05.410 Duplicate Returned Information: 0
00:22:05.410 Explicit Persistent Connection Support for Discovery: 0
00:22:05.410 Transport Requirements:
00:22:05.410 Secure Channel: Not Required
00:22:05.410 Port ID: 0 (0x0000)
00:22:05.410 Controller ID: 65535 (0xffff)
00:22:05.410 Admin Max SQ Size: [2024-11-15 11:03:54.228182] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:05.410 [2024-11-15 11:03:54.228191] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 17341 doesn't match qid
00:22:05.410 [2024-11-15 11:03:54.228203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32647 cdw0:458d9c10 sqhd:8320 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228208] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 17341 doesn't match qid
00:22:05.410 [2024-11-15 11:03:54.228214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32647 cdw0:458d9c10 sqhd:8320 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228218] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 17341 doesn't match qid
00:22:05.410 [2024-11-15 11:03:54.228224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32647 cdw0:458d9c10 sqhd:8320 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228229] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 17341 doesn't match qid
00:22:05.410 [2024-11-15 11:03:54.228234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32647 cdw0:458d9c10 sqhd:8320 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228243] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228267] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228278] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228289] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228305] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
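The two discovery log entries above (the discovery subsystem itself, plus one NVM subsystem on the same RDMA/IPv4 port) are what any host should get back from this target. For a quick cross-check outside SPDK, the equivalent nvme-cli query, assuming the kernel nvme-rdma modules are loaded on the initiator, would be:
  nvme discover -t rdma -a 192.168.100.8 -s 4420
This issues the same GET LOG PAGE (log identifier 70h) seen in the trace; the destruct/shutdown property accesses that follow are the host detaching once the report has printed.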
00:22:05.410 [2024-11-15 11:03:54.228315] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:22:05.410 [2024-11-15 11:03:54.228319] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:22:05.410 [2024-11-15 11:03:54.228323] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228330] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228357] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228367] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228374] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228400] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228409] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228417] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228445] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228454] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228462] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228488] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228497] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228504] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228527] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228536] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228543] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228566] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228575] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228583] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228605] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228614] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228621] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.410 [2024-11-15 11:03:54.228648] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.410 [2024-11-15 11:03:54.228652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0
00:22:05.410 [2024-11-15 11:03:54.228657] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228664] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.410 [2024-11-15 11:03:54.228670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228686] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228695] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228702] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228729] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228738] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228745] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228769] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228778] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228785] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228807] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228816] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228823] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228843] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228852] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228859] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228882] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228890] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228898] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228923] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228932] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228939] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.228969] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.228973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.228978] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228985] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.228991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.229008] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.229013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.229017] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229024] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.229049] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.229054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.229058] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229065] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.229089] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.229093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.229098] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229105] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.229128] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.229133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.229137] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229144] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.229172] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.229177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.229182] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229189] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.411 [2024-11-15 11:03:54.229196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.411 [2024-11-15 11:03:54.229217] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.411 [2024-11-15 11:03:54.229222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:22:05.411 [2024-11-15 11:03:54.229226] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229234] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229260] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229269] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229277] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229302] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229311] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229318] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229343] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229352] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229359] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229381] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229390] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229397] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229424] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229433] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229443] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229466] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229474] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229482] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229505] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229514] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229521] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229543] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229552] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229559] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229583] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229592] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229599] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229623] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229631] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229639] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229665] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229674] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229683] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229708] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229717] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229724] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229746] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229755] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229762] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.412 [2024-11-15 11:03:54.229784] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.412 [2024-11-15 11:03:54.229788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:22:05.412 [2024-11-15 11:03:54.229793] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229800] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.412 [2024-11-15 11:03:54.229806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.229824] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.229828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.229832] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229840] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.229867] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.229871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.229875] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229882] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.229909] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.229914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.229920] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229927] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.229951] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.229955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.229960] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229967] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.229973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.229989] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.229993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.229998] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230005] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230029] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230037] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230045] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230070] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230079] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230086] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230110] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230119] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230125] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230151] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230165] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230172] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230201] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230210] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230217] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230242] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230251] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230258] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230287] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230295] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230302] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230326] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230335] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230342] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.413 [2024-11-15 11:03:54.230348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.413 [2024-11-15 11:03:54.230368] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.413 [2024-11-15 11:03:54.230372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0
00:22:05.413 [2024-11-15 11:03:54.230377] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230383] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230407] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230418] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230425] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230452] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230461] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230468] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230490] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230498] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230505] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230528] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230537] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230544] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230567] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230576] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230583] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230607] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230616] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230623] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230646] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230655] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230662] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230688] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230696] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230703] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230730] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230739] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230746] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230773] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230782] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230789] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230815] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230824] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230831] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230854] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230863] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230870] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230891] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230899] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230907] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230934] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0
00:22:05.414 [2024-11-15 11:03:54.230943] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230950] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.414 [2024-11-15 11:03:54.230956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.414 [2024-11-15 11:03:54.230973] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.414 [2024-11-15 11:03:54.230978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:05.414 [2024-11-15 11:03:54.230982] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x189200 00:22:05.414 [2024-11-15 11:03:54.230989] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200 00:22:05.414 [2024-11-15 11:03:54.230995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.415 [2024-11-15 11:03:54.231016] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.415 [2024-11-15 11:03:54.231021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:05.415 [2024-11-15 11:03:54.231025] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231032] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.415 [2024-11-15 11:03:54.231056] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.415 [2024-11-15 11:03:54.231060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:05.415 [2024-11-15 11:03:54.231065] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231072] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.415 [2024-11-15 11:03:54.231095] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.415 [2024-11-15 11:03:54.231100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:05.415 [2024-11-15 11:03:54.231104] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231111] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.415 [2024-11-15 11:03:54.231133] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.415 [2024-11-15 11:03:54.231138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:05.415 [2024-11-15 11:03:54.231142] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231150] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200 00:22:05.415 [2024-11-15 11:03:54.231155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0
00:22:05.415 [2024-11-15 11:03:54.235168] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.415 [2024-11-15 11:03:54.235173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:22:05.415 [2024-11-15 11:03:54.235178] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x189200
00:22:05.415 [2024-11-15 11:03:54.235185] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.415 [2024-11-15 11:03:54.235191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.415 [2024-11-15 11:03:54.235212] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.415 [2024-11-15 11:03:54.235216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0018 p:0 m:0 dnr:0
00:22:05.415 [2024-11-15 11:03:54.235221] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x189200
00:22:05.415 [2024-11-15 11:03:54.235226] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:22:05.415 128
00:22:05.415 Transport Service Identifier: 4420
00:22:05.415 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:05.415 Transport Address: 192.168.100.8
00:22:05.415 Transport Specific Address Subtype - RDMA
00:22:05.415 RDMA QP Service Type: 1 (Reliable Connected)
00:22:05.415 RDMA Provider Type: 1 (No provider specified)
00:22:05.415 RDMA CM Service: 1 (RDMA_CM)
00:22:05.415 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:22:05.678 [2024-11-15 11:03:54.306212] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:22:05.678 [2024-11-15 11:03:54.306252] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507227 ]
00:22:05.678 [2024-11-15 11:03:54.363333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:22:05.678 [2024-11-15 11:03:54.363401] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:22:05.678 [2024-11-15 11:03:54.363417] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:22:05.678 [2024-11-15 11:03:54.363421] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:22:05.678 [2024-11-15 11:03:54.363446] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:22:05.678 [2024-11-15 11:03:54.374882] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
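The identify.sh invocation above is the step that drives everything that follows: spdk_nvme_identify parses the -r transport ID string, connects to the RDMA target, and walks the admin-queue bring-up traced in the DEBUG records below (FABRIC CONNECT, VS/CAP reads, CC.EN = 1, CSTS.RDY poll, IDENTIFY, SET FEATURES). As a reading aid, here is a minimal standalone sketch of that same flow against SPDK's public API. This is not the test's own code; the file and program names are invented, it assumes an installed SPDK with headers on the include path, and error handling is abbreviated.

/* identify_sketch.c (hypothetical name): connect to the same NVMe-oF RDMA
 * target the test exercises and print a couple of identify fields.
 * Not part of the autotest tree; a sketch only. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (hugepages, DPDK EAL); compare the
	 * "DPDK EAL parameters" record logged above. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* The same transport ID fields the harness passes via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* spdk_nvme_connect() drives the admin state machine the DEBUG
	 * records trace below: FABRIC CONNECT, read VS/CAP, set CC.EN = 1,
	 * wait for CSTS.RDY = 1, IDENTIFY, AER and keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	/* Fields the identify report further down prints (model number,
	 * firmware version) come from this controller data structure. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), cdata->fr);

	/* Detach runs the shutdown sequence (CC.SHN, CSTS.SHST poll) that
	 * the nvme_ctrlr_shutdown_poll_async records correspond to. */
	spdk_nvme_detach(ctrlr);
	return 0;
}

The -L all flag on the invocation enables every SPDK log flag, which is why the nvme_rdma and nvme_qpair *DEBUG* records appear interleaved with the identify report here; debug-level output additionally requires an SPDK built with --enable-debug, as this CI build is.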
00:22:05.678 [2024-11-15 11:03:54.385376] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0
00:22:05.678 [2024-11-15 11:03:54.385387] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:22:05.678 [2024-11-15 11:03:54.385393] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385398] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385403] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385407] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385414] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385419] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385424] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385428] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385432] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385437] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385442] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385448] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385452] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385458] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385464] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385468] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385472] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385477] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385481] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385489] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385493] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385498] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385502] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385507] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385511] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385516] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385521] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385526] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385531] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385540] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385545] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385549] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:22:05.678 [2024-11-15 11:03:54.385554] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0
00:22:05.678 [2024-11-15 11:03:54.385557] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:22:05.678 [2024-11-15 11:03:54.385571] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.385582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x189200
00:22:05.678 [2024-11-15 11:03:54.390171] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.678 [2024-11-15 11:03:54.390180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:22:05.678 [2024-11-15 11:03:54.390187] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.390192] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:22:05.678 [2024-11-15 11:03:54.390198] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:22:05.678 [2024-11-15 11:03:54.390203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:22:05.678 [2024-11-15 11:03:54.390213] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200
00:22:05.678 [2024-11-15 11:03:54.390220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.678 [2024-11-15 11:03:54.390237] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.678 [2024-11-15 11:03:54.390242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:22:05.678 [2024-11-15 11:03:54.390247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:22:05.678 [2024-11-15 11:03:54.390251]
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x189200 00:22:05.678 [2024-11-15 11:03:54.390256] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:05.678 [2024-11-15 11:03:54.390263] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.678 [2024-11-15 11:03:54.390269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390287] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390296] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:05.679 [2024-11-15 11:03:54.390300] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:05.679 [2024-11-15 11:03:54.390312] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390332] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:05.679 [2024-11-15 11:03:54.390348] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390355] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390376] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390385] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:05.679 [2024-11-15 11:03:54.390389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:05.679 [2024-11-15 11:03:54.390393] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:05.679 [2024-11-15 11:03:54.390505] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:05.679 [2024-11-15 11:03:54.390510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:05.679 [2024-11-15 11:03:54.390516] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390539] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:05.679 [2024-11-15 11:03:54.390552] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390559] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390585] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390593] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:05.679 [2024-11-15 11:03:54.390598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390602] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:05.679 [2024-11-15 11:03:54.390615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390623] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x189200 00:22:05.679 [2024-11-15 11:03:54.390670] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:22:05.679 [2024-11-15 11:03:54.390682] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:05.679 [2024-11-15 11:03:54.390686] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:05.679 [2024-11-15 11:03:54.390690] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:05.679 [2024-11-15 11:03:54.390694] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:05.679 [2024-11-15 11:03:54.390699] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:05.679 [2024-11-15 11:03:54.390703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390707] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390721] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390750] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390761] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.679 [2024-11-15 11:03:54.390771] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.679 [2024-11-15 11:03:54.390782] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.679 [2024-11-15 11:03:54.390792] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.679 [2024-11-15 11:03:54.390802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390806] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x189200 
00:22:05.679 [2024-11-15 11:03:54.390815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390822] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390844] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390853] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:05.679 [2024-11-15 11:03:54.390857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390862] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390881] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.679 [2024-11-15 11:03:54.390908] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.390913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:22:05.679 [2024-11-15 11:03:54.390964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390969] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:05.679 [2024-11-15 11:03:54.390982] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.679 [2024-11-15 11:03:54.390988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x189200 00:22:05.679 [2024-11-15 11:03:54.391013] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.679 [2024-11-15 11:03:54.391017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:05.679 
[2024-11-15 11:03:54.391030] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:05.680 [2024-11-15 11:03:54.391041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391046] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391058] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x189200 00:22:05.680 [2024-11-15 11:03:54.391104] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391125] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391138] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x189200 00:22:05.680 [2024-11-15 11:03:54.391172] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391189] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391222] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:05.680 [2024-11-15 11:03:54.391226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:05.680 [2024-11-15 11:03:54.391231] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:05.680 [2024-11-15 11:03:54.391243] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.680 [2024-11-15 11:03:54.391255] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.680 [2024-11-15 11:03:54.391269] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391279] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391287] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.680 [2024-11-15 11:03:54.391299] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391308] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391312] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391321] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391328] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.680 [2024-11-15 11:03:54.391357] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391361] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391365] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391372] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:05.680 [2024-11-15 11:03:54.391399] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391408] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391419] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x189200 00:22:05.680 [2024-11-15 11:03:54.391433] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x189200 00:22:05.680 [2024-11-15 11:03:54.391445] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x189200 00:22:05.680 [2024-11-15 11:03:54.391458] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x189200 00:22:05.680 [2024-11-15 11:03:54.391470] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391485] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391493] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:05.680 [2024-11-15 11:03:54.391497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:05.680 [2024-11-15 11:03:54.391505] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x189200 00:22:05.680 [2024-11-15 11:03:54.391510] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.680 [2024-11-15 11:03:54.391514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:22:05.680 [2024-11-15 11:03:54.391519] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x189200
00:22:05.680 [2024-11-15 11:03:54.391532] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.680 [2024-11-15 11:03:54.391537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:22:05.680 [2024-11-15 11:03:54.391544] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x189200
00:22:05.680 =====================================================
00:22:05.680 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:22:05.680 =====================================================
00:22:05.680 Controller Capabilities/Features
00:22:05.680 ================================
00:22:05.680 Vendor ID: 8086
00:22:05.680 Subsystem Vendor ID: 8086
00:22:05.680 Serial Number: SPDK00000000000001
00:22:05.680 Model Number: SPDK bdev Controller
00:22:05.680 Firmware Version: 25.01
00:22:05.680 Recommended Arb Burst: 6
00:22:05.680 IEEE OUI Identifier: e4 d2 5c
00:22:05.680 Multi-path I/O
00:22:05.680 May have multiple subsystem ports: Yes
00:22:05.680 May have multiple controllers: Yes
00:22:05.680 Associated with SR-IOV VF: No
00:22:05.680 Max Data Transfer Size: 131072
00:22:05.680 Max Number of Namespaces: 32
00:22:05.680 Max Number of I/O Queues: 127
00:22:05.680 NVMe Specification Version (VS): 1.3
00:22:05.680 NVMe Specification Version (Identify): 1.3
00:22:05.680 Maximum Queue Entries: 128
00:22:05.680 Contiguous Queues Required: Yes
00:22:05.680 Arbitration Mechanisms Supported
00:22:05.680 Weighted Round Robin: Not Supported
00:22:05.680 Vendor Specific: Not Supported
00:22:05.680 Reset Timeout: 15000 ms
00:22:05.680 Doorbell Stride: 4 bytes
00:22:05.680 NVM Subsystem Reset: Not Supported
00:22:05.680 Command Sets Supported
00:22:05.680 NVM Command Set: Supported
00:22:05.680 Boot Partition: Not Supported
00:22:05.680 Memory Page Size Minimum: 4096 bytes
00:22:05.680 Memory Page Size Maximum: 4096 bytes
00:22:05.680 Persistent Memory Region: Not Supported
00:22:05.681 Optional Asynchronous Events Supported
00:22:05.681 Namespace Attribute Notices: Supported
00:22:05.681 Firmware Activation Notices: Not Supported
00:22:05.681 ANA Change Notices: Not Supported
00:22:05.681 PLE Aggregate Log Change Notices: Not Supported
00:22:05.681 LBA Status Info Alert Notices: Not Supported
00:22:05.681 EGE Aggregate Log Change Notices: Not Supported
00:22:05.681 Normal NVM Subsystem Shutdown event: Not Supported
00:22:05.681 Zone Descriptor Change Notices: Not Supported
00:22:05.681 Discovery Log Change Notices: Not Supported
00:22:05.681 Controller Attributes
00:22:05.681 128-bit Host Identifier: Supported
00:22:05.681 Non-Operational Permissive Mode: Not Supported
00:22:05.681 NVM Sets: Not Supported
00:22:05.681 Read Recovery Levels: Not Supported
00:22:05.681 Endurance Groups: Not Supported
00:22:05.681 Predictable Latency Mode: Not Supported
00:22:05.681 Traffic Based Keep ALive: Not Supported
00:22:05.681 Namespace Granularity: Not Supported
00:22:05.681 SQ Associations: Not Supported
00:22:05.681 UUID List: Not Supported
00:22:05.681 Multi-Domain Subsystem: Not Supported
00:22:05.681 Fixed Capacity Management: Not Supported
00:22:05.681 Variable Capacity Management: Not Supported
00:22:05.681 Delete Endurance Group: Not Supported
00:22:05.681 Delete NVM Set: Not Supported
00:22:05.681 Extended LBA Formats Supported: Not Supported
00:22:05.681 Flexible Data Placement Supported: Not Supported
00:22:05.681
00:22:05.681 Controller Memory Buffer Support
00:22:05.681 ================================
00:22:05.681 Supported: No
00:22:05.681
00:22:05.681 Persistent Memory Region Support
00:22:05.681 ================================
00:22:05.681 Supported: No
00:22:05.681
00:22:05.681 Admin Command Set Attributes
00:22:05.681 ============================
00:22:05.681 Security Send/Receive: Not Supported
00:22:05.681 Format NVM: Not Supported
00:22:05.681 Firmware Activate/Download: Not Supported
00:22:05.681 Namespace Management: Not Supported
00:22:05.681 Device Self-Test: Not Supported
00:22:05.681 Directives: Not Supported
00:22:05.681 NVMe-MI: Not Supported
00:22:05.681 Virtualization Management: Not Supported
00:22:05.681 Doorbell Buffer Config: Not Supported
00:22:05.681 Get LBA Status Capability: Not Supported
00:22:05.681 Command & Feature Lockdown Capability: Not Supported
00:22:05.681 Abort Command Limit: 4
00:22:05.681 Async Event Request Limit: 4
00:22:05.681 Number of Firmware Slots: N/A
00:22:05.681 Firmware Slot 1 Read-Only: N/A
00:22:05.681 Firmware Activation Without Reset: N/A
00:22:05.681 Multiple Update Detection Support: N/A
00:22:05.681 Firmware Update Granularity: No Information Provided
00:22:05.681 Per-Namespace SMART Log: No
00:22:05.681 Asymmetric Namespace Access Log Page: Not Supported
00:22:05.681 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:05.681 Command Effects Log Page: Supported
00:22:05.681 Get Log Page Extended Data: Supported
00:22:05.681 Telemetry Log Pages: Not Supported
00:22:05.681 Persistent Event Log Pages: Not Supported
00:22:05.681 Supported Log Pages Log Page: May Support
00:22:05.681 Commands Supported & Effects Log Page: Not Supported
00:22:05.681 Feature Identifiers & Effects Log Page:May Support
00:22:05.681 NVMe-MI Commands & Effects Log Page: May Support
00:22:05.681 Data Area 4 for Telemetry Log: Not Supported
00:22:05.681 Error Log Page Entries Supported: 128
00:22:05.681 Keep Alive: Supported
00:22:05.681 Keep Alive Granularity: 10000 ms
00:22:05.681
00:22:05.681 NVM Command Set Attributes
00:22:05.681 ==========================
00:22:05.681 Submission Queue Entry Size
00:22:05.681 Max: 64
00:22:05.681 Min: 64
00:22:05.681 Completion Queue Entry Size
00:22:05.681 Max: 16
00:22:05.681 Min: 16
00:22:05.681 Number of Namespaces: 32
00:22:05.681 Compare Command: Supported
00:22:05.681 Write Uncorrectable Command: Not Supported
00:22:05.681 Dataset Management Command: Supported
00:22:05.681 Write Zeroes Command: Supported
00:22:05.681 Set Features Save Field: Not Supported
00:22:05.681 Reservations: Supported
00:22:05.681 Timestamp: Not Supported
00:22:05.681 Copy: Supported
00:22:05.681 Volatile Write Cache: Present
00:22:05.681 Atomic Write Unit (Normal): 1
00:22:05.681 Atomic Write Unit (PFail): 1
00:22:05.681 Atomic Compare & Write Unit: 1
00:22:05.681 Fused Compare & Write: Supported
00:22:05.681 Scatter-Gather List
00:22:05.681 SGL Command Set: Supported
00:22:05.681 SGL Keyed: Supported
00:22:05.681 SGL Bit Bucket Descriptor: Not Supported
00:22:05.681 SGL Metadata Pointer: Not Supported
00:22:05.681 Oversized SGL: Not Supported
00:22:05.681 SGL Metadata Address: Not Supported
00:22:05.681 SGL Offset: Supported
00:22:05.681 Transport SGL Data Block: Not Supported
00:22:05.681 Replay Protected Memory Block: Not Supported
00:22:05.681
00:22:05.681 Firmware Slot Information
00:22:05.681 =========================
00:22:05.681 Active slot: 1
00:22:05.681 Slot 1 Firmware Revision: 25.01
00:22:05.681
00:22:05.681
00:22:05.681 Commands Supported and Effects
00:22:05.681 ==============================
00:22:05.681 Admin Commands
00:22:05.681 --------------
00:22:05.681 Get Log Page (02h): Supported
00:22:05.681 Identify (06h): Supported
00:22:05.681 Abort (08h): Supported
00:22:05.681 Set Features (09h): Supported
00:22:05.681 Get Features (0Ah): Supported
00:22:05.681 Asynchronous Event Request (0Ch): Supported
00:22:05.681 Keep Alive (18h): Supported
00:22:05.681 I/O Commands
00:22:05.681 ------------
00:22:05.681 Flush (00h): Supported LBA-Change
00:22:05.681 Write (01h): Supported LBA-Change
00:22:05.681 Read (02h): Supported
00:22:05.681 Compare (05h): Supported
00:22:05.681 Write Zeroes (08h): Supported LBA-Change
00:22:05.681 Dataset Management (09h): Supported LBA-Change
00:22:05.681 Copy (19h): Supported LBA-Change
00:22:05.681
00:22:05.681 Error Log
00:22:05.681 =========
00:22:05.681
00:22:05.681 Arbitration
00:22:05.681 ===========
00:22:05.681 Arbitration Burst: 1
00:22:05.681
00:22:05.681 Power Management
00:22:05.681 ================
00:22:05.681 Number of Power States: 1
00:22:05.681 Current Power State: Power State #0
00:22:05.681 Power State #0:
00:22:05.681 Max Power: 0.00 W
00:22:05.681 Non-Operational State: Operational
00:22:05.681 Entry Latency: Not Reported
00:22:05.681 Exit Latency: Not Reported
00:22:05.681 Relative Read Throughput: 0
00:22:05.681 Relative Read Latency: 0
00:22:05.681 Relative Write Throughput: 0
00:22:05.681 Relative Write Latency: 0
00:22:05.681 Idle Power: Not Reported
00:22:05.681 Active Power: Not Reported
00:22:05.681 Non-Operational Permissive Mode: Not Supported
00:22:05.681
00:22:05.681 Health Information
00:22:05.681 ==================
00:22:05.681 Critical Warnings:
00:22:05.681 Available Spare Space: OK
00:22:05.681 Temperature: OK
00:22:05.681 Device Reliability: OK
00:22:05.681 Read Only: No
00:22:05.681 Volatile Memory Backup: OK
00:22:05.681 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:05.681 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:05.681 Available Spare: 0%
00:22:05.681 Available Spare Threshold: 0%
00:22:05.681 Life Percentage [2024-11-15 11:03:54.391619] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x189200
00:22:05.681 [2024-11-15 11:03:54.391627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.681 [2024-11-15 11:03:54.391648] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.681 [2024-11-15 11:03:54.391653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:22:05.681 [2024-11-15 11:03:54.391657] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x189200
00:22:05.681 [2024-11-15 11:03:54.391679] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:22:05.681 [2024-11-15 11:03:54.391687] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18761 doesn't match qid
00:22:05.681 [2024-11-15 11:03:54.391699] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32614 cdw0:428b7d0 sqhd:5320 p:0 m:0 dnr:0
00:22:05.681 [2024-11-15 11:03:54.391704] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18761 doesn't match qid
00:22:05.681 [2024-11-15 11:03:54.391710-391731] [the *ERROR* sqid-mismatch / *NOTICE* ABORTED - SQ DELETION pair above repeats three more times for the same completion]
00:22:05.681 [2024-11-15 11:03:54.391738] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x189200
00:22:05.681 [2024-11-15 11:03:54.391745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.682 [2024-11-15 11:03:54.391767] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.682 [2024-11-15 11:03:54.391772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0
00:22:05.682 [2024-11-15 11:03:54.391779] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.682 [2024-11-15 11:03:54.391786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.682 [2024-11-15 11:03:54.391791] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x189200
00:22:05.682 [2024-11-15 11:03:54.391806] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.682 [2024-11-15 11:03:54.391811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:22:05.682 [2024-11-15 11:03:54.391815] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:22:05.682 [2024-11-15 11:03:54.391820] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:22:05.682 [2024-11-15 11:03:54.391824] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x189200
00:22:05.682 [2024-11-15 11:03:54.391831] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.682 [2024-11-15 11:03:54.391837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.682 [2024-11-15 11:03:54.391855] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.682 [2024-11-15 11:03:54.391860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:22:05.682 [2024-11-15 11:03:54.391865-394159] [the poll cycle above — request_ready/submit_request *DEBUG* pair, FABRIC PROPERTY GET qid:0 cid:3, CQ recv completion, SUCCESS (00/00) cdw0:1 — repeats for roughly 55 further polls, sqhd wrapping twice from 001c around to 0013, every read still returning cdw0:1; the per-poll lines are elided here]
00:22:05.685 [2024-11-15 11:03:54.398169] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x189200
00:22:05.685 [2024-11-15 11:03:54.398178] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x189200
00:22:05.685 [2024-11-15 11:03:54.398184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:05.685 [2024-11-15 11:03:54.398202] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:05.685 [2024-11-15 11:03:54.398207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0014 p:0 m:0 dnr:0
00:22:05.685 [2024-11-15 11:03:54.398212] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x189200
00:22:05.685 [2024-11-15 11:03:54.398217] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:22:05.685 Used: 0%
00:22:05.685 Data Units Read: 0
00:22:05.685 Data Units Written: 0
00:22:05.685 Host Read Commands: 0
00:22:05.685 Host Write Commands: 0
00:22:05.685 Controller Busy Time: 0 minutes
00:22:05.685 Power Cycles: 0
00:22:05.685 Power On Hours: 0 hours
00:22:05.685 Unsafe Shutdowns: 0
00:22:05.685 Unrecoverable Media Errors: 0
00:22:05.685 Lifetime Error Log Entries: 0
00:22:05.685 Warning Temperature Time: 0 minutes
00:22:05.685 Critical Temperature Time: 0 minutes
00:22:05.685 
00:22:05.685 Number of Queues
00:22:05.685 ================
00:22:05.685 Number of I/O Submission Queues: 127
00:22:05.685 Number of I/O Completion Queues: 127
00:22:05.685 
00:22:05.685 Active Namespaces
00:22:05.685 =================
00:22:05.685 Namespace ID:1
00:22:05.685 Error Recovery Timeout: Unlimited
00:22:05.685 Command Set Identifier: NVM (00h)
00:22:05.685 Deallocate: Supported
00:22:05.685 Deallocated/Unwritten Error: Not Supported
00:22:05.685 Deallocated Read Value: Unknown
00:22:05.685 Deallocate in Write Zeroes: Not Supported
00:22:05.685 Deallocated Guard Field: 0xFFFF
00:22:05.685 Flush: Supported
00:22:05.685 Reservation: Supported
00:22:05.685 Namespace Sharing Capabilities: Multiple Controllers
00:22:05.685 Size (in LBAs): 131072 (0GiB)
00:22:05.685 Capacity (in LBAs): 131072 (0GiB)
00:22:05.685 Utilization (in LBAs): 131072 (0GiB)
00:22:05.685 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:05.685 EUI64: ABCDEF0123456789
00:22:05.685 UUID: 69a14679-9d8e-41d3-bc15-0e53f4699852
00:22:05.685 Thin Provisioning: Not Supported
00:22:05.685 Per-NS Atomic Units: Yes
00:22:05.685 Atomic Boundary Size (Normal): 0
00:22:05.685 Atomic Boundary Size (PFail): 0
00:22:05.685 Atomic Boundary Offset: 0
00:22:05.685 Maximum Single Source Range Length: 65535
00:22:05.685 Maximum Copy Length: 65535
00:22:05.685 Maximum Source Range Count: 1
00:22:05.685 NGUID/EUI64 Never Reused: No
00:22:05.685 Namespace Write Protected: No
00:22:05.685 Number of LBA Formats: 1
00:22:05.685 Current LBA Format: LBA Format #00
00:22:05.685 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:05.685 
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
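The rpc_cmd teardown traced above deletes the test subsystem over SPDK's JSON-RPC socket. A hand-run equivalent, as a sketch — the socket path is assumed to be the default /var/tmp/spdk.sock, which this log does not show:

    # Delete the subsystem created for the identify pass (assumed default socket).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
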
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1506984 ']'
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1506984
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 1506984 ']'
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 1506984
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1506984
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:05.685 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1506984'
killing process with pid 1506984
00:22:05.686 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 1506984
00:22:05.686 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 1506984
00:22:05.956 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:05.957 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:22:05.957 
00:22:05.957 real 0m6.868s
00:22:05.957 user 0m5.908s
00:22:05.957 sys 0m4.452s
00:22:05.957 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:05.957 11:03:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:05.957 ************************************
00:22:05.957 END TEST nvmf_identify
00:22:05.957 ************************************
00:22:06.216 11:03:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf
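nvmfcleanup's unload loop above retries module removal up to 20 times under set +e, since nvme-rdma can stay busy while queue pairs drain. A standalone sketch of that pattern — the break-on-success shape and the sleep between attempts are assumptions, not the traced nvmf/common.sh source:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumption: give in-flight disconnects time to finish
    done
    set -e
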
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:06.216 11:03:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:06.216 11:03:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:06.216 11:03:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.216 ************************************ 00:22:06.216 START TEST nvmf_perf 00:22:06.216 ************************************ 00:22:06.216 11:03:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:06.216 * Looking for test storage... 00:22:06.216 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:06.216 11:03:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:06.216 11:03:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:06.216 11:03:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:06.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.216 --rc genhtml_branch_coverage=1 00:22:06.216 --rc genhtml_function_coverage=1 00:22:06.216 --rc genhtml_legend=1 00:22:06.216 --rc geninfo_all_blocks=1 00:22:06.216 --rc geninfo_unexecuted_blocks=1 00:22:06.216 00:22:06.216 ' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:06.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.216 --rc genhtml_branch_coverage=1 00:22:06.216 --rc genhtml_function_coverage=1 00:22:06.216 --rc genhtml_legend=1 00:22:06.216 --rc geninfo_all_blocks=1 00:22:06.216 --rc geninfo_unexecuted_blocks=1 00:22:06.216 00:22:06.216 ' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:06.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.216 --rc genhtml_branch_coverage=1 00:22:06.216 --rc genhtml_function_coverage=1 00:22:06.216 --rc genhtml_legend=1 00:22:06.216 --rc geninfo_all_blocks=1 00:22:06.216 --rc geninfo_unexecuted_blocks=1 00:22:06.216 00:22:06.216 ' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:06.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.216 --rc genhtml_branch_coverage=1 00:22:06.216 --rc genhtml_function_coverage=1 00:22:06.216 --rc genhtml_legend=1 00:22:06.216 --rc geninfo_all_blocks=1 00:22:06.216 --rc geninfo_unexecuted_blocks=1 00:22:06.216 00:22:06.216 ' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.216 11:03:55 
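The lt/cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them field by field, padding the shorter one with zeros. A condensed, hypothetical rewrite of that walk (not the scripts/common.sh source):

    lt() { # returns 0 when version $1 sorts before version $2
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # missing fields compare as 0, mirroring the padded compare traced above
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1 # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"
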
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.216 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.217 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.217 11:03:55 
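The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test requires both -eq operands to be integers, and an unset build flag expands to the empty string. The script tolerates the failed test and falls through, but a defensive default avoids the noise; a sketch with a hypothetical flag name:

    SOME_FLAG=""
    [ "$SOME_FLAG" -eq 1 ]        # bash: [: : integer expression expected
    [ "${SOME_FLAG:-0}" -eq 1 ]   # empty/unset defaults to 0, no error
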
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.217 11:03:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.780 11:04:00 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.780 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:22:12.781 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:22:12.781 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:22:12.781 Found net devices under 0000:af:00.0: mlx_0_0 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 
1 == 0 )) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:22:12.781 Found net devices under 0000:af:00.1: mlx_0_1 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:12.781 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:12.781 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:22:12.781 altname enp175s0f0np0 00:22:12.781 altname ens801f0np0 00:22:12.781 inet 192.168.100.8/24 scope global mlx_0_0 00:22:12.781 valid_lft forever preferred_lft forever 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:12.781 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:12.781 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:22:12.781 altname enp175s0f1np1 00:22:12.781 altname ens801f1np1 00:22:12.781 inet 192.168.100.9/24 scope global mlx_0_1 00:22:12.781 valid_lft forever preferred_lft forever 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 
-- # get_rdma_if_list 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:12.781 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:12.782 192.168.100.9' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:12.782 192.168.100.9' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:12.782 
11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:12.782 192.168.100.9' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1510363 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1510363 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 1510363 ']' 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:12.782 11:04:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:12.782 [2024-11-15 11:04:00.845864] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:22:12.782 [2024-11-15 11:04:00.845911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.782 [2024-11-15 11:04:00.907839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.782 [2024-11-15 11:04:00.951488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.782 [2024-11-15 11:04:00.951526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:12.782 [2024-11-15 11:04:00.951534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.782 [2024-11-15 11:04:00.951540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.782 [2024-11-15 11:04:00.951550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.782 [2024-11-15 11:04:00.953190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.782 [2024-11-15 11:04:00.953288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.782 [2024-11-15 11:04:00.953392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.782 [2024-11-15 11:04:00.953393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:12.782 11:04:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:15.314 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:15.314 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:15.573 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:15.573 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:15.831 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:15.831 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:15.831 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:15.831 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:22:15.831 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:22:16.088 [2024-11-15 11:04:04.738206] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:22:16.088 [2024-11-15 11:04:04.758003] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18daad0/0x17b0720) succeed. 00:22:16.088 [2024-11-15 11:04:04.767540] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18dbf80/0x1830400) succeed. 
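The trace above has created the RDMA transport and the backing bdevs: a 64 MiB, 512-byte-block Malloc0 plus the local Nvme0n1 picked up through gen_nvme.sh. The lines that follow attach both to a subsystem and expose it on 192.168.100.8:4420. Condensed into a plain script, the RPC sequence this test drives is (paths, NQN, and addresses exactly as traced in this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                     # creates "Malloc0": 64 MiB of 512 B blocks
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0  # -c 0 requests zero in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The rdma.c warning above ("In capsule data size is set to 256") is the transport raising that -c 0 request to the minimum it needs to support msdbd=16. The earlier "common.sh: line 33: [: : integer expression expected" message is bash objecting to an empty string being tested with -eq; the test simply evaluates false and the script carries on.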
00:22:16.088 11:04:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.347 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:16.347 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:16.605 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:16.605 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:16.863 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:16.863 [2024-11-15 11:04:05.674748] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:16.863 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:17.121 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:17.121 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:17.121 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:17.121 11:04:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:18.497 Initializing NVMe Controllers 00:22:18.497 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:18.497 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:18.497 Initialization complete. Launching workers. 00:22:18.497 ======================================================== 00:22:18.497 Latency(us) 00:22:18.497 Device Information : IOPS MiB/s Average min max 00:22:18.497 PCIE (0000:5e:00.0) NSID 1 from core 0: 97089.49 379.26 329.05 24.35 4473.41 00:22:18.497 ======================================================== 00:22:18.497 Total : 97089.49 379.26 329.05 24.35 4473.41 00:22:18.497 00:22:18.497 11:04:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:21.782 Initializing NVMe Controllers 00:22:21.782 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.782 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:21.782 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:21.782 Initialization complete. Launching workers. 
00:22:21.782 ======================================================== 00:22:21.782 Latency(us) 00:22:21.782 Device Information : IOPS MiB/s Average min max 00:22:21.782 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6442.99 25.17 154.85 49.66 4092.49 00:22:21.782 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5079.99 19.84 196.45 71.33 4155.53 00:22:21.782 ======================================================== 00:22:21.782 Total : 11522.98 45.01 173.19 49.66 4155.53 00:22:21.782 00:22:21.782 11:04:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:25.063 Initializing NVMe Controllers 00:22:25.063 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.063 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:25.063 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:25.063 Initialization complete. Launching workers. 00:22:25.063 ======================================================== 00:22:25.063 Latency(us) 00:22:25.063 Device Information : IOPS MiB/s Average min max 00:22:25.064 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17533.46 68.49 1822.98 522.81 9468.37 00:22:25.064 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3901.88 15.24 8193.19 5070.24 15896.24 00:22:25.064 ======================================================== 00:22:25.064 Total : 21435.34 83.73 2982.55 522.81 15896.24 00:22:25.064 00:22:25.322 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:22:25.322 11:04:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:29.513 Initializing NVMe Controllers 00:22:29.513 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.513 Controller IO queue size 128, less than required. 00:22:29.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.513 Controller IO queue size 128, less than required. 00:22:29.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.513 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.513 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.513 Initialization complete. Launching workers. 
00:22:29.513 ======================================================== 00:22:29.513 Latency(us) 00:22:29.513 Device Information : IOPS MiB/s Average min max 00:22:29.513 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3781.50 945.37 34109.71 15400.82 88457.75 00:22:29.513 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3939.00 984.75 32062.52 15214.69 53058.42 00:22:29.513 ======================================================== 00:22:29.513 Total : 7720.50 1930.12 33065.23 15214.69 88457.75 00:22:29.513 00:22:29.513 11:04:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:22:30.079 No valid NVMe controllers or AIO or URING devices found 00:22:30.079 Initializing NVMe Controllers 00:22:30.079 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.079 Controller IO queue size 128, less than required. 00:22:30.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.079 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:30.079 Controller IO queue size 128, less than required. 00:22:30.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.079 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:30.079 WARNING: Some requested NVMe devices were skipped 00:22:30.079 11:04:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:22:34.264 Initializing NVMe Controllers 00:22:34.264 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.264 Controller IO queue size 128, less than required. 00:22:34.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:34.264 Controller IO queue size 128, less than required. 00:22:34.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:34.264 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.264 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:34.264 Initialization complete. Launching workers. 
00:22:34.264 00:22:34.264 ==================== 00:22:34.264 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:34.264 RDMA transport: 00:22:34.264 dev name: mlx5_0 00:22:34.264 polls: 380811 00:22:34.264 idle_polls: 377921 00:22:34.264 completions: 42162 00:22:34.264 queued_requests: 1 00:22:34.264 total_send_wrs: 21081 00:22:34.264 send_doorbell_updates: 2637 00:22:34.264 total_recv_wrs: 21208 00:22:34.264 recv_doorbell_updates: 2639 00:22:34.264 --------------------------------- 00:22:34.264 00:22:34.264 ==================== 00:22:34.264 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:34.264 RDMA transport: 00:22:34.264 dev name: mlx5_0 00:22:34.264 polls: 383430 00:22:34.264 idle_polls: 383163 00:22:34.264 completions: 19702 00:22:34.264 queued_requests: 1 00:22:34.264 total_send_wrs: 9851 00:22:34.264 send_doorbell_updates: 251 00:22:34.264 total_recv_wrs: 9978 00:22:34.265 recv_doorbell_updates: 252 00:22:34.265 --------------------------------- 00:22:34.265 ======================================================== 00:22:34.265 Latency(us) 00:22:34.265 Device Information : IOPS MiB/s Average min max 00:22:34.265 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5270.00 1317.50 24366.71 10414.70 75729.79 00:22:34.265 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2462.50 615.62 51769.04 28797.87 81281.11 00:22:34.265 ======================================================== 00:22:34.265 Total : 7732.50 1933.12 33093.28 10414.70 81281.11 00:22:34.265 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.523 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:34.523 rmmod nvme_rdma 00:22:34.781 rmmod nvme_fabrics 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1510363 ']' 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1510363 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 1510363 ']' 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@956 -- # kill -0 1510363 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1510363 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1510363' 00:22:34.781 killing process with pid 1510363 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 1510363 00:22:34.781 11:04:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 1510363 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:36.683 00:22:36.683 real 0m30.175s 00:22:36.683 user 1m38.361s 00:22:36.683 sys 0m5.650s 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:36.683 ************************************ 00:22:36.683 END TEST nvmf_perf 00:22:36.683 ************************************ 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.683 ************************************ 00:22:36.683 START TEST nvmf_fio_host 00:22:36.683 ************************************ 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:36.683 * Looking for test storage... 
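For reference before the fio host output continues: the nvmf_perf test that just tore down swept spdk_nvme_perf from a local PCIe baseline to NVMe/RDMA at increasing queue depth and I/O size. Collected in one place (flags verbatim from the trace above; the RDMA target is this run's 192.168.100.8:4420):

    perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
    tgt='trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
    $perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'  # local NVMe baseline
    $perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r "$tgt"                                   # QD1 latency probe over RDMA
    $perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r "$tgt"                              # QD32, flags as traced
    $perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$tgt"                      # QD128, 256 KiB I/O, 16 KiB units
    $perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -c 0xf -P 4 -r "$tgt"            # rejected: 36964 B is not a multiple of the 512 B sector size, so both namespaces are skipped
    $perf -q 128 -o 262144 -w randrw -M 50 -t 2 --transport-stat -r "$tgt"              # emits the per-queue poll/WR/doorbell statistics shown above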
00:22:36.683 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.683 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:36.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.684 --rc genhtml_branch_coverage=1 00:22:36.684 --rc genhtml_function_coverage=1 00:22:36.684 --rc genhtml_legend=1 00:22:36.684 --rc geninfo_all_blocks=1 00:22:36.684 --rc geninfo_unexecuted_blocks=1 00:22:36.684 00:22:36.684 ' 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:36.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.684 --rc genhtml_branch_coverage=1 00:22:36.684 --rc genhtml_function_coverage=1 00:22:36.684 --rc genhtml_legend=1 00:22:36.684 --rc geninfo_all_blocks=1 00:22:36.684 --rc geninfo_unexecuted_blocks=1 00:22:36.684 00:22:36.684 ' 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:36.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.684 --rc genhtml_branch_coverage=1 00:22:36.684 --rc genhtml_function_coverage=1 00:22:36.684 --rc genhtml_legend=1 00:22:36.684 --rc geninfo_all_blocks=1 00:22:36.684 --rc geninfo_unexecuted_blocks=1 00:22:36.684 00:22:36.684 ' 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:36.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.684 --rc genhtml_branch_coverage=1 00:22:36.684 --rc genhtml_function_coverage=1 00:22:36.684 --rc genhtml_legend=1 00:22:36.684 --rc geninfo_all_blocks=1 00:22:36.684 --rc geninfo_unexecuted_blocks=1 00:22:36.684 00:22:36.684 ' 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.684 11:04:25 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.684 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.685 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:36.685 
11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.685 11:04:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:22:41.959 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:22:41.959 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- 
# [[ rdma == tcp ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:22:41.959 Found net devices under 0000:af:00.0: mlx_0_0 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:22:41.959 Found net devices under 0000:af:00.1: mlx_0_1 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:41.959 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 
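The discovery pass above reduces each matched Mellanox PCI address to its kernel net devices by globbing sysfs, which is where the "Found net devices under 0000:af:00.0: mlx_0_0" lines come from. A minimal standalone sketch of that logic, assembled from the expansions echoed in the trace (the find_net_devs wrapper name is illustrative, not part of nvmf/common.sh):

    # Map PCI addresses to their network interface names via sysfs.
    find_net_devs() {
        local pci pci_net_devs
        for pci in "$@"; do                                  # e.g. 0000:af:00.0
            pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # one entry per netdev
            [[ -e ${pci_net_devs[0]} ]] || continue          # skip devices with no netdev
            pci_net_devs=("${pci_net_devs[@]##*/}")          # strip path, keep names
            echo "Found net devices under $pci: ${pci_net_devs[*]}"
        done
    }
    find_net_devs 0000:af:00.0 0000:af:00.1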
00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:41.960 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.960 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:22:41.960 altname enp175s0f0np0 00:22:41.960 altname ens801f0np0 00:22:41.960 inet 192.168.100.8/24 scope global mlx_0_0 00:22:41.960 valid_lft forever preferred_lft forever 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk 
'{print $4}' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:41.960 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.960 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:22:41.960 altname enp175s0f1np1 00:22:41.960 altname ens801f1np1 00:22:41.960 inet 192.168.100.9/24 scope global mlx_0_1 00:22:41.960 valid_lft forever preferred_lft forever 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:41.960 192.168.100.9' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:41.960 192.168.100.9' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:41.960 192.168.100.9' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1517440 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1517440 00:22:41.960 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 1517440 ']' 00:22:41.961 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.961 11:04:30 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:41.961 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.961 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:41.961 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.961 [2024-11-15 11:04:30.635259] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:22:41.961 [2024-11-15 11:04:30.635324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.961 [2024-11-15 11:04:30.699628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.961 [2024-11-15 11:04:30.743270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.961 [2024-11-15 11:04:30.743310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.961 [2024-11-15 11:04:30.743317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.961 [2024-11-15 11:04:30.743324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.961 [2024-11-15 11:04:30.743329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.961 [2024-11-15 11:04:30.744993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.961 [2024-11-15 11:04:30.745089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.961 [2024-11-15 11:04:30.745158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.961 [2024-11-15 11:04:30.745160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.221 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:42.221 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:22:42.221 11:04:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:42.221 [2024-11-15 11:04:31.052889] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1042230/0x1046720) succeed. 00:22:42.221 [2024-11-15 11:04:31.062232] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10438c0/0x1087dc0) succeed. 
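Just before the target starts, allocate_nic_ips harvests one IPv4 address per RDMA interface with the ip/awk/cut pipeline echoed above and splits the result into the two target addresses. A sketch reconstructed from those commands (get_ip_address is the name shown at nvmf/common.sh@116; its body is inferred from the pipeline at @117, and printf is only used here to join the two lookups):

    # Return the bare IPv4 address of an interface, e.g. 192.168.100.8.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One address per RDMA netdev, newline-separated, then split first/second.
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

With mlx_0_0 at 192.168.100.8 and mlx_0_1 at 192.168.100.9, these are the values the listener setup below binds to.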
00:22:42.480 11:04:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:42.480 11:04:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.480 11:04:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.480 11:04:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:42.738 Malloc1 00:22:42.738 11:04:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.997 11:04:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:42.997 11:04:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:43.256 [2024-11-15 11:04:32.016407] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:43.256 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:43.515 11:04:32 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.515 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.516 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:43.516 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:43.516 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:43.516 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:43.516 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:43.516 11:04:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:43.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:43.775 fio-3.35 00:22:43.775 Starting 1 thread 00:22:46.334 00:22:46.334 test: (groupid=0, jobs=1): err= 0: pid=1518041: Fri Nov 15 11:04:34 2024 00:22:46.334 read: IOPS=17.1k, BW=66.8MiB/s (70.0MB/s)(134MiB/2004msec) 00:22:46.334 slat (nsec): min=1425, max=38114, avg=1555.69, stdev=531.77 00:22:46.334 clat (usec): min=1968, max=6748, avg=3716.70, stdev=104.63 00:22:46.334 lat (usec): min=1990, max=6749, avg=3718.26, stdev=104.56 00:22:46.334 clat percentiles (usec): 00:22:46.334 | 1.00th=[ 3359], 5.00th=[ 3687], 10.00th=[ 3687], 20.00th=[ 3687], 00:22:46.334 | 30.00th=[ 3720], 40.00th=[ 3720], 50.00th=[ 3720], 60.00th=[ 3720], 00:22:46.334 | 70.00th=[ 3720], 80.00th=[ 3720], 90.00th=[ 3752], 95.00th=[ 3752], 00:22:46.334 | 99.00th=[ 4047], 99.50th=[ 4080], 99.90th=[ 5276], 99.95th=[ 6194], 00:22:46.334 | 99.99th=[ 6718] 00:22:46.334 bw ( KiB/s): min=67080, max=69184, per=100.00%, avg=68376.00, stdev=951.59, samples=4 00:22:46.334 iops : min=16770, max=17296, avg=17094.00, stdev=237.90, samples=4 00:22:46.334 write: IOPS=17.1k, BW=66.9MiB/s (70.1MB/s)(134MiB/2004msec); 0 zone resets 00:22:46.334 slat (nsec): min=1457, max=17904, avg=1632.30, stdev=497.60 00:22:46.334 clat (usec): min=2774, max=6754, avg=3714.99, stdev=95.33 00:22:46.334 lat (usec): min=2785, max=6755, avg=3716.63, stdev=95.26 00:22:46.334 clat percentiles (usec): 00:22:46.334 | 1.00th=[ 3359], 5.00th=[ 3687], 10.00th=[ 3687], 20.00th=[ 3687], 00:22:46.334 | 30.00th=[ 3720], 40.00th=[ 3720], 50.00th=[ 3720], 60.00th=[ 3720], 00:22:46.334 | 70.00th=[ 3720], 80.00th=[ 3720], 90.00th=[ 3752], 95.00th=[ 3752], 00:22:46.334 | 99.00th=[ 4047], 99.50th=[ 4080], 99.90th=[ 4883], 99.95th=[ 5800], 00:22:46.334 | 99.99th=[ 6652] 00:22:46.334 bw ( KiB/s): min=67264, max=69360, per=100.00%, avg=68508.00, stdev=885.98, samples=4 00:22:46.334 iops : min=16816, max=17340, avg=17127.00, stdev=221.49, samples=4 00:22:46.334 lat (msec) : 2=0.01%, 4=98.73%, 10=1.26% 00:22:46.334 cpu : usr=99.55%, sys=0.05%, ctx=25, majf=0, minf=3 00:22:46.334 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:46.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:46.334 issued rwts: total=34256,34313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:46.334 00:22:46.334 Run status group 0 (all jobs): 00:22:46.334 READ: bw=66.8MiB/s (70.0MB/s), 66.8MiB/s-66.8MiB/s (70.0MB/s-70.0MB/s), io=134MiB (140MB), run=2004-2004msec 00:22:46.334 WRITE: bw=66.9MiB/s (70.1MB/s), 66.9MiB/s-66.9MiB/s (70.1MB/s-70.1MB/s), io=134MiB (141MB), run=2004-2004msec 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:46.334 11:04:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:46.593 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:46.593 fio-3.35 00:22:46.593 Starting 1 thread 00:22:49.113 00:22:49.113 test: (groupid=0, jobs=1): err= 0: pid=1518614: Fri Nov 15 11:04:37 2024 00:22:49.113 read: IOPS=13.9k, BW=217MiB/s (227MB/s)(426MiB/1963msec) 00:22:49.113 slat (nsec): min=2364, max=50719, avg=2665.14, stdev=1079.09 00:22:49.113 clat (usec): min=524, max=8282, avg=1661.75, stdev=1321.38 00:22:49.113 lat (usec): min=527, max=8302, avg=1664.42, stdev=1321.72 00:22:49.113 clat percentiles (usec): 00:22:49.113 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 955], 00:22:49.113 | 30.00th=[ 1029], 40.00th=[ 1123], 50.00th=[ 1221], 60.00th=[ 1336], 00:22:49.113 | 70.00th=[ 1467], 80.00th=[ 1647], 90.00th=[ 3818], 95.00th=[ 5145], 00:22:49.113 | 99.00th=[ 6587], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 7832], 00:22:49.113 | 99.99th=[ 8225] 00:22:49.113 bw ( KiB/s): min=106400, max=110240, per=48.69%, avg=108088.00, stdev=1647.79, samples=4 00:22:49.113 iops : min= 6650, max= 6890, avg=6755.50, stdev=102.99, samples=4 00:22:49.113 write: IOPS=7855, BW=123MiB/s (129MB/s)(220MiB/1793msec); 0 zone resets 00:22:49.113 slat (usec): min=27, max=126, avg=29.66, stdev= 5.51 00:22:49.113 clat (usec): min=4792, max=21764, avg=13275.19, stdev=2009.62 00:22:49.113 lat (usec): min=4822, max=21792, avg=13304.85, stdev=2009.06 00:22:49.113 clat percentiles (usec): 00:22:49.113 | 1.00th=[ 7242], 5.00th=[10290], 10.00th=[10945], 20.00th=[11731], 00:22:49.113 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13173], 60.00th=[13698], 00:22:49.113 | 70.00th=[14222], 80.00th=[14877], 90.00th=[15795], 95.00th=[16450], 00:22:49.113 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20841], 99.95th=[21365], 00:22:49.113 | 99.99th=[21627] 00:22:49.113 bw ( KiB/s): min=107904, max=114080, per=88.89%, avg=111728.00, stdev=2669.97, samples=4 00:22:49.113 iops : min= 6744, max= 7130, avg=6983.00, stdev=166.87, samples=4 00:22:49.113 lat (usec) : 750=1.25%, 1000=15.82% 00:22:49.113 lat (msec) : 2=40.52%, 4=2.05%, 10=7.36%, 20=32.90%, 50=0.09% 00:22:49.113 cpu : usr=97.06%, sys=1.35%, ctx=183, majf=0, minf=3 00:22:49.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:49.113 issued rwts: total=27235,14085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:49.113 00:22:49.113 Run status group 0 (all jobs): 00:22:49.113 READ: bw=217MiB/s (227MB/s), 217MiB/s-217MiB/s (227MB/s-227MB/s), io=426MiB (446MB), run=1963-1963msec 00:22:49.113 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=220MiB (231MB), run=1793-1793msec 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 
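Condensed, the fio_host pass above is: create the RDMA transport, export a 64 MiB malloc bdev with 512-byte blocks over it, then drive the namespace with fio through the SPDK NVMe plugin. The commands as they appear in the trace (the contents of example_config.fio and mock_sgl_config.fio are not shown in the log, so the job files stay opaque here):

    # Target-side setup over /var/tmp/spdk.sock.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Initiator-side fio run through the SPDK plugin.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
        --bs=4096

The reported figures are self-consistent: 17,094 read IOPS at 4 KiB per I/O is 17094 * 4096 ≈ 70.0 MB/s, i.e. the 66.8 MiB/s shown for the read leg.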
00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:49.113 rmmod nvme_rdma 00:22:49.113 rmmod nvme_fabrics 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1517440 ']' 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1517440 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 1517440 ']' 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 1517440 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1517440 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1517440' 00:22:49.113 killing process with pid 1517440 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 1517440 00:22:49.113 11:04:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 1517440 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:49.370 00:22:49.370 real 0m13.062s 00:22:49.370 user 0m47.504s 00:22:49.370 sys 0m4.918s 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.370 ************************************ 00:22:49.370 END TEST nvmf_fio_host 00:22:49.370 ************************************ 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:49.370 11:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 ************************************ 00:22:49.627 START TEST nvmf_failover 00:22:49.627 ************************************ 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:22:49.627 * Looking for test storage... 00:22:49.627 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.627 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:49.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.627 --rc genhtml_branch_coverage=1 00:22:49.627 --rc genhtml_function_coverage=1 00:22:49.628 --rc genhtml_legend=1 00:22:49.628 --rc geninfo_all_blocks=1 00:22:49.628 --rc geninfo_unexecuted_blocks=1 00:22:49.628 00:22:49.628 ' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:49.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.628 --rc genhtml_branch_coverage=1 00:22:49.628 --rc genhtml_function_coverage=1 00:22:49.628 --rc genhtml_legend=1 00:22:49.628 --rc geninfo_all_blocks=1 00:22:49.628 --rc geninfo_unexecuted_blocks=1 00:22:49.628 00:22:49.628 ' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:49.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.628 --rc genhtml_branch_coverage=1 00:22:49.628 --rc genhtml_function_coverage=1 00:22:49.628 --rc genhtml_legend=1 00:22:49.628 --rc geninfo_all_blocks=1 00:22:49.628 --rc geninfo_unexecuted_blocks=1 00:22:49.628 00:22:49.628 ' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:49.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.628 --rc genhtml_branch_coverage=1 00:22:49.628 --rc genhtml_function_coverage=1 00:22:49.628 --rc genhtml_legend=1 00:22:49.628 --rc geninfo_all_blocks=1 00:22:49.628 --rc geninfo_unexecuted_blocks=1 00:22:49.628 00:22:49.628 ' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.628 11:04:38 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.628 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.628 11:04:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:22:54.887 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:22:54.887 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:22:54.887 Found net devices under 0000:af:00.0: mlx_0_0 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:22:54.887 Found net devices under 0000:af:00.1: mlx_0_1 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 
00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:54.887 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:54.887 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.887 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:22:54.888 altname enp175s0f0np0 00:22:54.888 altname ens801f0np0 00:22:54.888 inet 192.168.100.8/24 scope global mlx_0_0 00:22:54.888 valid_lft forever preferred_lft forever 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:54.888 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.888 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:22:54.888 altname enp175s0f1np1 00:22:54.888 altname ens801f1np1 00:22:54.888 inet 192.168.100.9/24 scope global mlx_0_1 00:22:54.888 valid_lft forever preferred_lft forever 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:54.888 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:55.185 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:55.185 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:55.185 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:55.185 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:55.186 11:04:43 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:55.186 192.168.100.9' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:55.186 192.168.100.9' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:55.186 192.168.100.9' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1522084 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1522084 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1522084 ']' 00:22:55.186 11:04:43 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:55.186 11:04:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.186 [2024-11-15 11:04:43.904253] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:22:55.186 [2024-11-15 11:04:43.904305] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.186 [2024-11-15 11:04:43.968012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.186 [2024-11-15 11:04:44.007783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.186 [2024-11-15 11:04:44.007824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.186 [2024-11-15 11:04:44.007831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.186 [2024-11-15 11:04:44.007849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.186 [2024-11-15 11:04:44.007854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.186 [2024-11-15 11:04:44.009343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.186 [2024-11-15 11:04:44.009427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.186 [2024-11-15 11:04:44.009428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.461 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.461 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:55.461 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.461 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:55.461 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.461 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.461 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:55.733 [2024-11-15 11:04:44.372402] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf399e0/0xf3ded0) succeed. 00:22:55.733 [2024-11-15 11:04:44.381441] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf3afd0/0xf7f570) succeed. 
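Condensed, the nvmfappstart sequence just traced is: start nvmf_tgt on core mask 0xE, wait for its RPC socket, then create the RDMA transport. A minimal sketch under those assumptions follows; SPDK_DIR stands in for the workspace path above, and the polling loop is a hand-rolled stand-in for the harness's waitforlisten helper, not its actual implementation.

#!/usr/bin/env bash
# Sketch of nvmfappstart -m 0xE plus transport creation, per the log.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk  # assumption: workspace layout above
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Stand-in for waitforlisten: poll until the RPC socket answers.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices above confirm the transport bound both mlx5 ports.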
00:22:55.733 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:55.995 Malloc0 00:22:55.995 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.252 11:04:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.252 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:56.509 [2024-11-15 11:04:45.317843] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:56.509 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:56.767 [2024-11-15 11:04:45.522358] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:56.767 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:57.025 [2024-11-15 11:04:45.719048] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1522447 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1522447 /var/tmp/bdevperf.sock 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1522447 ']' 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
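The setup traced above (failover.sh lines 23-31) amounts to: one 64 MiB malloc bdev exported as a namespace of cnode1, three RDMA listeners on 192.168.100.8 to fail over between, and bdevperf parked on its own RPC socket. A hedged sketch, reusing SPDK_DIR from the previous note; rpc() is a hypothetical shorthand for the scripts/rpc.py invocations in the log, with all arguments taken from it verbatim.

# One namespace, three listener ports, then the I/O generator.
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }
NQN=nqn.2016-06.io.spdk:cnode1
rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do                # three paths to fail over between
    rpc nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s "$port"
done
"$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!

bdevperf then attaches NVMe0 to ports 4420 and 4421 with -x failover (next records), and the test alternates removing and re-adding listeners while perform_tests runs.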
00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.025 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.283 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:57.283 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:57.283 11:04:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.540 NVMe0n1 00:22:57.540 11:04:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.797 00:22:57.797 11:04:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.797 11:04:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1522481 00:22:57.797 11:04:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:58.730 11:04:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:58.988 11:04:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:02.268 11:04:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.268 00:23:02.268 11:04:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:02.525 11:04:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:05.801 11:04:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:05.801 [2024-11-15 11:04:54.385610] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:05.801 11:04:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:06.731 11:04:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:06.988 11:04:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1522481 00:23:13.551 { 00:23:13.551 "results": [ 00:23:13.551 { 00:23:13.551 "job": "NVMe0n1", 00:23:13.551 "core_mask": "0x1", 00:23:13.551 "workload": "verify", 00:23:13.551 "status": "finished", 00:23:13.551 "verify_range": { 00:23:13.551 "start": 0, 00:23:13.551 "length": 16384 00:23:13.551 }, 00:23:13.551 "queue_depth": 128, 00:23:13.551 "io_size": 4096, 
00:23:13.551 "runtime": 15.004692, 00:23:13.551 "iops": 13801.08302123096, 00:23:13.551 "mibps": 53.910480551683435, 00:23:13.551 "io_failed": 4700, 00:23:13.551 "io_timeout": 0, 00:23:13.551 "avg_latency_us": 9045.948651779945, 00:23:13.551 "min_latency_us": 365.0782608695652, 00:23:13.551 "max_latency_us": 1021221.8434782609 00:23:13.551 } 00:23:13.551 ], 00:23:13.551 "core_count": 1 00:23:13.551 } 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1522447 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1522447 ']' 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1522447 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1522447 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1522447' 00:23:13.551 killing process with pid 1522447 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1522447 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1522447 00:23:13.551 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:13.551 [2024-11-15 11:04:45.792204] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:23:13.551 [2024-11-15 11:04:45.792256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522447 ] 00:23:13.551 [2024-11-15 11:04:45.855525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.551 [2024-11-15 11:04:45.897270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.551 Running I/O for 15 seconds... 
00:23:13.551 17408.00 IOPS, 68.00 MiB/s [2024-11-15T10:05:02.435Z] 9472.00 IOPS, 37.00 MiB/s [2024-11-15T10:05:02.435Z] [2024-11-15 11:04:48.699275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.551 [2024-11-15 11:04:48.699311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
[hundreds of near-identical command/completion pairs elided: WRITEs lba 21016-21496 (SGL DATA BLOCK) and READs from lba 20480 upward (SGL KEYED DATA BLOCK, key:0x189600), each completed ABORTED - SQ DELETION (00/08) on qid:1 as the 4420 listener was removed at 11:04:48; the capture ends mid-record]
nsid:1 lba:20824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.700951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.700959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.700967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.700975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.700982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.700991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.700998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.701006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.701013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.701021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.701028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.701036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.701042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.701051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.701058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.701066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x189600 00:23:13.554 [2024-11-15 11:04:48.701073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.554 [2024-11-15 11:04:48.701081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 
len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.701268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x189600 00:23:13.555 [2024-11-15 11:04:48.701275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.702539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.555 [2024-11-15 11:04:48.702551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.555 [2024-11-15 11:04:48.702558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21000 len:8 PRP1 0x0 PRP2 0x0 00:23:13.555 [2024-11-15 11:04:48.702566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:48.702617] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:23:13.555 [2024-11-15 11:04:48.702630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:13.555 [2024-11-15 11:04:48.705561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:13.555 [2024-11-15 11:04:48.720053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:23:13.555 [2024-11-15 11:04:48.760191] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
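The (00/08) printed with every completion above is the NVMe status pair SCT/SC: status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), the expected status for I/O still queued on a submission queue that bdev_nvme tears down when it starts a failover. A minimal decoder sketch, assuming only the standard NVMe status layout (decode_status and its lookup tables are illustrative helpers, not SPDK APIs):

    # Decode the "(SCT/SC)" pair that spdk_nvme_print_completion logs,
    # e.g. "(00/08)" -> generic command status / ABORTED - SQ DELETION.
    # Illustrative helper only; not part of the SPDK API.
    SCT_NAMES = {
        0x0: "generic command status",
        0x1: "command specific status",
        0x2: "media and data integrity errors",
        0x7: "vendor specific",
    }
    GENERIC_SC_NAMES = {
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(pair: str) -> str:
        """Decode a '(SCT/SC)' token from an SPDK completion log line."""
        sct_hex, sc_hex = pair.strip("()").split("/")
        sct, sc = int(sct_hex, 16), int(sc_hex, 16)
        sct_name = SCT_NAMES.get(sct, f"SCT {sct:#x}")
        sc_name = GENERIC_SC_NAMES.get(sc, f"SC {sc:#x}") if sct == 0x0 else f"SC {sc:#x}"
        return f"{sct_name}: {sc_name}"

    print(decode_status("(00/08)"))  # generic command status: ABORTED - SQ DELETION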
00:23:13.555 11208.00 IOPS, 43.78 MiB/s [2024-11-15T10:05:02.439Z] 12729.75 IOPS, 49.73 MiB/s [2024-11-15T10:05:02.439Z] 12114.60 IOPS, 47.32 MiB/s [2024-11-15T10:05:02.439Z] [2024-11-15 11:04:52.184693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.184989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.184995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.185004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.185010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.185019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x189c00 00:23:13.555 [2024-11-15 11:04:52.185027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.555 [2024-11-15 11:04:52.185037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 
00:23:13.556 [2024-11-15 11:04:52.185188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x189c00 00:23:13.556 [2024-11-15 11:04:52.185306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.556 [2024-11-15 11:04:52.185520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.556 [2024-11-15 11:04:52.185528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 
11:04:52.185637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 
dnr:0 00:23:13.557 [2024-11-15 11:04:52.185787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x189c00 00:23:13.557 [2024-11-15 11:04:52.185809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x189c00 00:23:13.557 [2024-11-15 11:04:52.185825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x189c00 00:23:13.557 [2024-11-15 11:04:52.185841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x189c00 00:23:13.557 [2024-11-15 11:04:52.185856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x189c00 00:23:13.557 [2024-11-15 11:04:52.185871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x189c00 00:23:13.557 [2024-11-15 11:04:52.185886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x189c00 00:23:13.557 [2024-11-15 11:04:52.185902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.185985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.185993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.186007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.186023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.186038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.186052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.186067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.186082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.557 [2024-11-15 11:04:52.186097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.557 [2024-11-15 11:04:52.186106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 
11:04:52.186235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.558 [2024-11-15 11:04:52.186422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x189c00 00:23:13.558 [2024-11-15 11:04:52.186670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.558 [2024-11-15 11:04:52.186678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.558 [2024-11-15 11:04:52.186685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.559 [2024-11-15 11:04:52.186694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.559 [2024-11-15 11:04:52.186700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.559 [2024-11-15 11:04:52.186708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.559 [2024-11-15 11:04:52.186715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.559 [2024-11-15 11:04:52.186724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.559 [2024-11-15 11:04:52.186731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.559 [2024-11-15 11:04:52.187947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:13.559 [2024-11-15 11:04:52.187962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:13.559 [2024-11-15 11:04:52.187969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104344 len:8 PRP1 0x0 PRP2 0x0
00:23:13.559 [2024-11-15 11:04:52.187977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.559 [2024-11-15 11:04:52.188020] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:23:13.559 [2024-11-15 11:04:52.188030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:13.559 [2024-11-15 11:04:52.190930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:13.559 [2024-11-15 11:04:52.205269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:23:13.559 [2024-11-15 11:04:52.250038] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
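The abort burst above is the expected shape of an NVMe-oF failover: every command still queued on the dying qpair is completed as ABORTED - SQ DELETION, then bdev_nvme fails the controller over from 192.168.100.8:4421 to 192.168.100.8:4422 and resets it on the new path. The harness later asserts that three such resets occurred (failover.sh@65 below); the same check can be repeated by hand against the captured bdevperf log, assuming it was written to test/nvmf/host/try.txt as in this job:

  grep -c 'Start failover' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt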
00:23:13.559 11172.83 IOPS, 43.64 MiB/s [2024-11-15T10:05:02.443Z] 12100.86 IOPS, 47.27 MiB/s [2024-11-15T10:05:02.443Z] 12796.62 IOPS, 49.99 MiB/s [2024-11-15T10:05:02.443Z] 13267.67 IOPS, 51.83 MiB/s [2024-11-15T10:05:02.443Z] [2024-11-15 11:04:56.599649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x189600 00:23:13.559 [2024-11-15 11:04:56.599686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x189600 00:23:13.559 [2024-11-15 11:04:56.599710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x189600 00:23:13.559 [2024-11-15 11:04:56.599726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x189600 00:23:13.559 [2024-11-15 11:04:56.599741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 
00:23:13.559 [2024-11-15 11:04:56.599831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.599983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 
sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.599993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.559 [2024-11-15 11:04:56.600161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.559 [2024-11-15 11:04:56.600181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.560 [2024-11-15 11:04:56.600386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63224 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x189600 00:23:13.560 
[2024-11-15 11:04:56.600604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x189600 00:23:13.560 [2024-11-15 11:04:56.600666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.560 [2024-11-15 11:04:56.600674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.561 [2024-11-15 11:04:56.600901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.561 [2024-11-15 11:04:56.600916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.561 [2024-11-15 11:04:56.600931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.561 [2024-11-15 11:04:56.600948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.561 [2024-11-15 11:04:56.600962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.600989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.600996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63480 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x189600 00:23:13.561 
[2024-11-15 11:04:56.601188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x189600 00:23:13.561 [2024-11-15 11:04:56.601237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.561 [2024-11-15 11:04:56.601246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 
11:04:56.601481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x189600 00:23:13.562 [2024-11-15 11:04:56.601521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0 00:23:13.562 [2024-11-15 11:04:56.601621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.562 [2024-11-15 11:04:56.601627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.562 [2024-11-15 11:04:56.601636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.562 [2024-11-15 11:04:56.601642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.562 [2024-11-15 11:04:56.601650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.562 [2024-11-15 11:04:56.601657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.562 [2024-11-15 11:04:56.601665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.562 [2024-11-15 11:04:56.601672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.562 [2024-11-15 11:04:56.601682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.562 [2024-11-15 11:04:56.601689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e1f78000 sqhd:7210 p:0 m:0 dnr:0
00:23:13.562 [2024-11-15 11:04:56.603150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:13.562 [2024-11-15 11:04:56.603167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:13.562 [2024-11-15 11:04:56.603175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64176 len:8 PRP1 0x0 PRP2 0x0
00:23:13.562 [2024-11-15 11:04:56.603182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.562 [2024-11-15 11:04:56.603223] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:23:13.562 [2024-11-15 11:04:56.603233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:13.562 [2024-11-15 11:04:56.606157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:13.562 [2024-11-15 11:04:56.620508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0
00:23:13.562 11940.90 IOPS, 46.64 MiB/s [2024-11-15T10:05:02.446Z]
[2024-11-15 11:04:56.662717] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
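The hop above mirrors the previous one, rotating the active path on to 192.168.100.8:4420: abort queued I/O, mark the controller failed, disconnect, reset, reconnect on the next trid. The hops in this test are driven by tearing down the listener the host is currently using; a plausible manual reproduction against the target's RPC socket would be the sketch below (nvmf_subsystem_remove_listener is a real SPDK RPC, but this exact invocation is illustrative rather than copied from the script):

  # Removing the active listener makes the host's reconnect fail, so
  # bdev_nvme fails over to the next trid registered with -x failover.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4422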
00:23:13.562 12402.27 IOPS, 48.45 MiB/s [2024-11-15T10:05:02.446Z]
12838.58 IOPS, 50.15 MiB/s [2024-11-15T10:05:02.446Z]
13208.69 IOPS, 51.60 MiB/s [2024-11-15T10:05:02.446Z]
13523.64 IOPS, 52.83 MiB/s [2024-11-15T10:05:02.446Z]
13799.53 IOPS, 53.90 MiB/s
00:23:13.562 Latency(us)
00:23:13.562 [2024-11-15T10:05:02.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:13.562 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:13.562 Verification LBA range: start 0x0 length 0x4000
00:23:13.562 NVMe0n1 : 15.00 13801.08 53.91 313.24 0.00 9045.95 365.08 1021221.84
00:23:13.562 [2024-11-15T10:05:02.446Z] ===================================================================================================================
00:23:13.562 [2024-11-15T10:05:02.446Z] Total : 13801.08 53.91 313.24 0.00 9045.95 365.08 1021221.84
00:23:13.562 Received shutdown signal, test time was about 15.000000 seconds
00:23:13.562
00:23:13.562 Latency(us)
00:23:13.562 [2024-11-15T10:05:02.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:13.562 [2024-11-15T10:05:02.446Z] ===================================================================================================================
00:23:13.563 [2024-11-15T10:05:02.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1525014
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1525014 /var/tmp/bdevperf.sock
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1525014 ']'
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:13.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
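In the trace above, failover.sh@72-75 backgrounds a fresh bdevperf in RPC-wait mode (-z keeps it idle until a perform_tests RPC arrives) and then blocks in waitforlisten until the UNIX socket accepts connections. A condensed sketch of that sequence, with a simplified poll standing in for the harness's waitforlisten helper:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket exists
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done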
00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:13.563 11:05:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.563 11:05:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:13.563 11:05:02 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:13.563 11:05:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:13.563 [2024-11-15 11:05:02.333072] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:13.563 11:05:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:13.820 [2024-11-15 11:05:02.545819] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:23:13.820 11:05:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:14.078 NVMe0n1 00:23:14.078 11:05:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:14.336 00:23:14.336 11:05:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:14.595 00:23:14.595 11:05:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.595 11:05:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:14.853 11:05:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.111 11:05:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:18.389 11:05:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.389 11:05:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:18.389 11:05:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1525944 00:23:18.389 11:05:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.389 11:05:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1525944 00:23:19.324 { 00:23:19.324 "results": [ 00:23:19.324 { 00:23:19.324 "job": "NVMe0n1", 
00:23:19.324 "core_mask": "0x1", 00:23:19.324 "workload": "verify", 00:23:19.324 "status": "finished", 00:23:19.324 "verify_range": { 00:23:19.324 "start": 0, 00:23:19.324 "length": 16384 00:23:19.324 }, 00:23:19.324 "queue_depth": 128, 00:23:19.324 "io_size": 4096, 00:23:19.324 "runtime": 1.008676, 00:23:19.324 "iops": 17512.06532127264, 00:23:19.324 "mibps": 68.40650516122125, 00:23:19.324 "io_failed": 0, 00:23:19.324 "io_timeout": 0, 00:23:19.324 "avg_latency_us": 7269.462785129174, 00:23:19.324 "min_latency_us": 2635.686956521739, 00:23:19.324 "max_latency_us": 10257.808695652175 00:23:19.324 } 00:23:19.324 ], 00:23:19.324 "core_count": 1 00:23:19.324 } 00:23:19.324 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.324 [2024-11-15 11:05:01.952691] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:23:19.324 [2024-11-15 11:05:01.952744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525014 ] 00:23:19.324 [2024-11-15 11:05:02.016306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.324 [2024-11-15 11:05:02.054312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.324 [2024-11-15 11:05:03.735447] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:23:19.324 [2024-11-15 11:05:03.735927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:19.324 [2024-11-15 11:05:03.735957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:19.324 [2024-11-15 11:05:03.754928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:23:19.324 [2024-11-15 11:05:03.771504] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:19.324 Running I/O for 1 seconds... 
00:23:19.324 17508.00 IOPS, 68.39 MiB/s 00:23:19.324 Latency(us) 00:23:19.324 [2024-11-15T10:05:08.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.324 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:19.324 Verification LBA range: start 0x0 length 0x4000 00:23:19.324 NVMe0n1 : 1.01 17512.07 68.41 0.00 0.00 7269.46 2635.69 10257.81 00:23:19.324 [2024-11-15T10:05:08.208Z] =================================================================================================================== 00:23:19.324 [2024-11-15T10:05:08.208Z] Total : 17512.07 68.41 0.00 0.00 7269.46 2635.69 10257.81 00:23:19.324 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.324 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:19.582 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.839 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.839 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:19.839 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.097 11:05:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:23.378 11:05:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:23.378 11:05:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1525014 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1525014 ']' 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1525014 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1525014 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1525014' 00:23:23.378 killing process with pid 1525014 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1525014 00:23:23.378 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1525014 00:23:23.636 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:23:23.636 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:23.894 rmmod nvme_rdma 00:23:23.894 rmmod nvme_fabrics 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1522084 ']' 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1522084 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1522084 ']' 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1522084 00:23:23.894 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:23.895 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:23.895 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1522084 00:23:23.895 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:23.895 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:23.895 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1522084' 00:23:23.895 killing process with pid 1522084 00:23:23.895 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1522084 00:23:23.895 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1522084 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:24.153 00:23:24.153 real 0m34.631s 00:23:24.153 user 1m59.116s 00:23:24.153 sys 0m5.913s 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
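The teardown above follows the usual nvmftestfini shape: the subsystem is deleted over RPC, the host-side nvme-rdma and nvme-fabrics modules are unloaded (inside a retry loop of up to 20 iterations, since module removal can race with in-flight disconnects), and the long-running nvmf target (pid 1522084 in this run) is killed. Roughly, as a sketch of what the trace performs:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma      # retried by nvmftestfini until it succeeds
    modprobe -v -r nvme-fabrics
    kill 1522084                  # rough stand-in for killprocess() on the target pid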
00:23:24.153 ************************************ 00:23:24.153 END TEST nvmf_failover 00:23:24.153 ************************************ 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.153 ************************************ 00:23:24.153 START TEST nvmf_host_discovery 00:23:24.153 ************************************ 00:23:24.153 11:05:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:24.413 * Looking for test storage... 00:23:24.413 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.413 --rc genhtml_branch_coverage=1 00:23:24.413 --rc genhtml_function_coverage=1 00:23:24.413 --rc genhtml_legend=1 00:23:24.413 --rc geninfo_all_blocks=1 00:23:24.413 --rc geninfo_unexecuted_blocks=1 00:23:24.413 00:23:24.413 ' 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.413 --rc genhtml_branch_coverage=1 00:23:24.413 --rc genhtml_function_coverage=1 00:23:24.413 --rc genhtml_legend=1 00:23:24.413 --rc geninfo_all_blocks=1 00:23:24.413 --rc geninfo_unexecuted_blocks=1 00:23:24.413 00:23:24.413 ' 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.413 --rc genhtml_branch_coverage=1 00:23:24.413 --rc genhtml_function_coverage=1 00:23:24.413 --rc genhtml_legend=1 00:23:24.413 --rc geninfo_all_blocks=1 00:23:24.413 --rc geninfo_unexecuted_blocks=1 00:23:24.413 00:23:24.413 ' 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:24.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.413 --rc genhtml_branch_coverage=1 00:23:24.413 --rc genhtml_function_coverage=1 00:23:24.413 --rc genhtml_legend=1 00:23:24.413 --rc geninfo_all_blocks=1 00:23:24.413 --rc geninfo_unexecuted_blocks=1 00:23:24.413 00:23:24.413 ' 00:23:24.413 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:24.414 11:05:13 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.414 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:23:24.414 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:23:24.414 00:23:24.414 real 0m0.199s 00:23:24.414 user 0m0.131s 00:23:24.414 sys 0m0.078s 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.414 ************************************ 00:23:24.414 END TEST nvmf_host_discovery 00:23:24.414 ************************************ 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.414 ************************************ 00:23:24.414 START TEST nvmf_host_multipath_status 00:23:24.414 ************************************ 00:23:24.414 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:23:24.674 * Looking for test storage... 00:23:24.674 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:24.674 11:05:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.674 --rc genhtml_branch_coverage=1 00:23:24.674 --rc genhtml_function_coverage=1 00:23:24.674 --rc genhtml_legend=1 00:23:24.674 --rc geninfo_all_blocks=1 00:23:24.674 --rc geninfo_unexecuted_blocks=1 00:23:24.674 00:23:24.674 ' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.674 --rc genhtml_branch_coverage=1 00:23:24.674 --rc genhtml_function_coverage=1 00:23:24.674 --rc genhtml_legend=1 00:23:24.674 --rc geninfo_all_blocks=1 00:23:24.674 --rc geninfo_unexecuted_blocks=1 00:23:24.674 00:23:24.674 ' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.674 --rc genhtml_branch_coverage=1 00:23:24.674 --rc genhtml_function_coverage=1 00:23:24.674 --rc genhtml_legend=1 00:23:24.674 --rc geninfo_all_blocks=1 00:23:24.674 --rc geninfo_unexecuted_blocks=1 00:23:24.674 00:23:24.674 ' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.674 --rc genhtml_branch_coverage=1 00:23:24.674 --rc genhtml_function_coverage=1 
00:23:24.674 --rc genhtml_legend=1 00:23:24.674 --rc geninfo_all_blocks=1 00:23:24.674 --rc geninfo_unexecuted_blocks=1 00:23:24.674 00:23:24.674 ' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.674 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:24.675 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.675 11:05:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.947 11:05:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.947 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:23:30.206 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:23:30.206 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:23:30.206 Found net devices under 0000:af:00.0: mlx_0_0 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:af:00.1: mlx_0_1'
00:23:30.206 Found net devices under 0000:af:00.1: mlx_0_1
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:23:30.206 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:23:30.206 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:23:30.206 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff
00:23:30.206 altname enp175s0f0np0
00:23:30.206 altname ens801f0np0
00:23:30.206 inet 192.168.100.8/24 scope global mlx_0_0
00:23:30.206 valid_lft forever preferred_lft forever
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:23:30.207 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:23:30.207 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff
00:23:30.207 altname enp175s0f1np1
00:23:30.207 altname ens801f1np1
00:23:30.207 inet 192.168.100.9/24 scope global mlx_0_1
00:23:30.207 valid_lft forever preferred_lft forever
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}'
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1
00:23:30.207 11:05:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:23:30.207 192.168.100.9'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:23:30.207 192.168.100.9'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:23:30.207 192.168.100.9'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1530053
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1530053
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1530053 ']'
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:30.207 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:30.465 [2024-11-15 11:05:19.094546] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization...
00:23:30.465 [2024-11-15 11:05:19.094592] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:30.465 [2024-11-15 11:05:19.161684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:30.465 [2024-11-15 11:05:19.212754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:30.465 [2024-11-15 11:05:19.212801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:30.465 [2024-11-15 11:05:19.212812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:30.465 [2024-11-15 11:05:19.212821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:30.465 [2024-11-15 11:05:19.212829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:30.465 [2024-11-15 11:05:19.214367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:30.465 [2024-11-15 11:05:19.214373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 ))
11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0
11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:30.722 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1530053
11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
[2024-11-15 11:05:19.556208] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b09b50/0x1b0e040) succeed.
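The trace above is nvmf/common.sh preparing the RDMA fabric before any NVMe-oF work starts: it loads the IB/RDMA kernel modules (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), walks the two mlx_0_* net devices, and reads 192.168.100.8 and 192.168.100.9 off them. The per-interface address lookup reduces to the ip | awk | cut pipeline traced at nvmf/common.sh@117; a minimal sketch of that helper, reconstructed from the xtrace (the pipeline is verbatim, the wrapper shape is paraphrased, not copied from the suite):

    # get_ip_address, as reconstructed from the nvmf/common.sh@116-117 xtrace above.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one record per address; field 4 is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # -> 192.168.100.9

With both addresses known, NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are split out of RDMA_IP_LIST with head/tail, nvme-rdma is modprobed, and nvmfappstart launches nvmf_tgt with -m 0x3 (cores 0-1), which is why the reactor notices above report exactly two cores.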
00:23:30.723 [2024-11-15 11:05:19.565174] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b0b0a0/0x1b4f6e0) succeed.
00:23:30.979 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:30.979 Malloc0
00:23:30.979 11:05:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:23:31.236 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:31.492 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
[2024-11-15 11:05:20.397419] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:23:31.748 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
[2024-11-15 11:05:20.589809] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:23:31.748 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1530315
11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1530315 /var/tmp/bdevperf.sock
11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1530315 ']'
11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100
11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
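Condensed, the target-side bring-up traced above is six RPCs followed by launching the I/O generator (commands verbatim from the trace, with the long /var/jenkins/.../spdk/scripts/rpc.py path shortened to rpc.py):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB backing bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &    # backgrounded; pid captured as bdevperf_pid

Two listeners on the same address but different service IDs expose the single Malloc0 namespace over two distinct ports, which is the precondition for the host seeing two I/O paths in everything that follows.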
00:23:31.749 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:31.749 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:32.005 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.005 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:32.005 11:05:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:32.261 11:05:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:32.518 Nvme0n1 00:23:32.518 11:05:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:32.774 Nvme0n1 00:23:32.774 11:05:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.774 11:05:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:35.294 11:05:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:35.294 11:05:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:23:35.294 11:05:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:35.295 11:05:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:36.232 11:05:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:36.232 11:05:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:36.232 11:05:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.232 11:05:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:36.490 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.490 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:36.490 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.490 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.746 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.004 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.004 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:37.004 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.004 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.261 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.261 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:37.261 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.261 11:05:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.519 11:05:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.519 11:05:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:37.519 11:05:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 
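On the host side, the same controller name is attached once per listener with -x multipath, so the two connections collapse into a single Nvme0n1 bdev with two I/O paths; port_status then pulls one attribute of one path out of bdev_nvme_get_io_paths. The probe, with the RPC and jq filter verbatim from the trace (the wrapper shape is inferred from how port_status is invoked):

    # Attach one controller per listener; -x multipath merges them into one bdev (Nvme0n1).
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
        -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # port_status 4420 current true  boils down to:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

check_status takes six booleans; judging by the six port_status calls that follow each invocation in the trace, they map in order to 4420.current, 4421.current, 4420.connected, 4421.connected, 4420.accessible and 4421.accessible.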
00:23:37.519 11:05:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:37.776 11:05:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:38.706 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:38.706 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:38.706 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.706 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:38.964 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:38.964 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:38.964 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.964 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.221 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.221 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.221 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.221 11:05:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.477 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.477 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.477 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.477 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.477 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.477 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.734 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:39.734 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:39.734 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.734 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:39.734 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.734 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:39.991 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.991 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:39.991 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:40.248 11:05:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:23:40.506 11:05:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:41.441 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:41.441 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:41.441 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.441 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.698 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:41.955 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.955 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:41.955 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.955 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.212 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.212 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.212 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:42.212 11:05:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.469 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.469 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:42.469 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.469 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:42.469 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.469 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:42.470 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:42.727 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:23:42.983 11:05:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:43.915 11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:43.915 11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:43.915 
11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.915 11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.172 11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.172 11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.172 11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.172 11:05:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.429 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.429 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.429 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.429 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.685 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:44.942 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.942 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:44.942 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.942 
11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.200 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.200 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:45.200 11:05:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:23:45.456 11:05:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:23:45.456 11:05:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:46.835 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.105 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.105 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.105 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.105 11:05:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.376 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.376 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:47.376 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.376 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.650 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.650 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:47.650 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.650 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.650 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.650 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:47.650 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:23:47.907 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:48.164 11:05:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:49.094 11:05:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:49.094 11:05:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:49.094 11:05:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.094 11:05:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.351 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.351 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:49.351 
11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.351 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.608 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.608 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.608 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.609 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.609 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.609 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.609 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.609 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.867 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.867 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:49.867 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.867 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.124 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.124 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.124 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.124 11:05:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.381 11:05:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.381 11:05:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:50.381 11:05:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:50.381 11:05:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:23:50.637 11:05:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:50.894 11:05:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:51.825 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:51.825 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.825 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.825 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.082 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.082 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:52.082 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.082 11:05:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.339 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.339 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.339 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.339 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.596 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.596 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.596 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.596 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.596 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.596 11:05:41 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.596 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.596 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.854 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.854 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:52.854 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.854 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.111 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.111 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:53.111 11:05:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:53.368 11:05:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:53.625 11:05:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:54.556 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:54.556 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.556 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.556 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.814 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.071 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.071 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.071 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.071 11:05:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.328 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.328 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:55.328 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.328 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.585 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.585 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.585 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.585 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.585 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.585 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:55.585 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:55.842 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:23:56.098 11:05:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
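Every scenario in this run is the same three-step loop: flip the ANA state of the two target listeners with nvmf_subsystem_listener_set_ana_state (optimized, non_optimized or inaccessible per listener), sleep 1 so the host can digest the resulting ANA change notification, then re-check all six path attributes. From multipath_status.sh@116 onward the bdev is also switched from the default active_passive policy to active_active, after which every path in the preferred ANA group reports current=true (both paths when both listeners are optimized or both non_optimized, only one when the states differ), which is what the check_status expectations in this half of the run encode. A convenience one-liner for eyeballing all three attributes of every path at once (not part of the suite, just a debugging aid built on the same RPC and fields):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r \
        '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'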
00:23:57.027 11:05:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:23:57.027 11:05:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:57.027 11:05:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.027 11:05:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:57.283 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.283 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:57.283 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.283 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:57.540 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.540 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:57.540 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.540 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.797 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:58.054 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.054 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:58.054 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:58.054 11:05:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:58.312 11:05:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.312 11:05:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:23:58.312 11:05:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:23:58.569 11:05:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:23:58.569 11:05:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:59.937 11:05:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:00.194 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
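The @133/@59/@60 records above flip the ANA state of the two rdma listeners on the target side, and the test then sleeps one second so the host can digest the ANA change before check_status re-reads the paths. A sketch of that step under the same assumptions (names reconstructed from the log; the rpc.py verb and arguments are as logged):

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
nqn=nqn.2016-06.io.spdk:cnode1

# set_ANA_state <state for the 4420 listener> <state for the 4421 listener>
set_ANA_state() {
	"$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" -t rdma -a 192.168.100.8 -s 4420 -n "$1"
	"$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

set_ANA_state non_optimized inaccessible
sleep 1   # let the host process the ANA change notification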
00:24:00.194 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:00.194 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:00.194 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:00.450 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:00.450 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:00.450 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:00.450 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:00.707 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:00.707 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:00.707 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:00.707 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1530315
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1530315 ']'
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1530315
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1530315
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1530315'
killing process with pid 1530315
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1530315
00:24:00.964 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1530315
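killprocess (common/autotest_common.sh, the @952-@976 records above) is the harness's guarded kill: require a pid, confirm the process is alive, refuse to kill a sudo wrapper, then kill and reap it. A sketch matching the logged steps; the real helper may differ in detail:

killprocess() {
	local pid=$1 process_name
	[ -z "$pid" ] && return 1                    # @952: a pid argument is required
	kill -0 "$pid" 2>/dev/null || return 1       # @956: is the process still alive?
	if [ "$(uname)" = Linux ]; then              # @957: platform-specific name lookup
		process_name=$(ps --no-headers -o comm= "$pid")   # @958: here -> reactor_2
	fi
	[ "$process_name" = sudo ] && return 1       # @962: never kill a sudo wrapper
	echo "killing process with pid $pid"         # @970
	kill "$pid"                                  # @971
	wait "$pid" || true                          # @976: reap our child, ignore the kill-induced status
}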
"Nvme0n1", 00:24:00.964 "core_mask": "0x4", 00:24:00.964 "workload": "verify", 00:24:00.964 "status": "terminated", 00:24:00.964 "verify_range": { 00:24:00.964 "start": 0, 00:24:00.964 "length": 16384 00:24:00.964 }, 00:24:00.964 "queue_depth": 128, 00:24:00.964 "io_size": 4096, 00:24:00.964 "runtime": 27.965099, 00:24:00.964 "iops": 15206.597337631452, 00:24:00.964 "mibps": 59.40077085012286, 00:24:00.964 "io_failed": 0, 00:24:00.964 "io_timeout": 0, 00:24:00.964 "avg_latency_us": 8396.86565337422, 00:24:00.964 "min_latency_us": 89.93391304347826, 00:24:00.964 "max_latency_us": 3019898.88 00:24:00.964 } 00:24:00.964 ], 00:24:00.964 "core_count": 1 00:24:00.964 } 00:24:01.224 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1530315 00:24:01.225 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.225 [2024-11-15 11:05:20.651094] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:24:01.225 [2024-11-15 11:05:20.651153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530315 ] 00:24:01.225 [2024-11-15 11:05:20.711427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.225 [2024-11-15 11:05:20.752615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.225 Running I/O for 90 seconds... 00:24:01.225 17664.00 IOPS, 69.00 MiB/s [2024-11-15T10:05:50.109Z] 17792.00 IOPS, 69.50 MiB/s [2024-11-15T10:05:50.109Z] 17815.33 IOPS, 69.59 MiB/s [2024-11-15T10:05:50.109Z] 17792.00 IOPS, 69.50 MiB/s [2024-11-15T10:05:50.109Z] 17779.00 IOPS, 69.45 MiB/s [2024-11-15T10:05:50.109Z] 17781.17 IOPS, 69.46 MiB/s [2024-11-15T10:05:50.109Z] 17770.00 IOPS, 69.41 MiB/s [2024-11-15T10:05:50.109Z] 17761.75 IOPS, 69.38 MiB/s [2024-11-15T10:05:50.109Z] 17739.11 IOPS, 69.29 MiB/s [2024-11-15T10:05:50.109Z] 17724.50 IOPS, 69.24 MiB/s [2024-11-15T10:05:50.109Z] 17715.73 IOPS, 69.20 MiB/s [2024-11-15T10:05:50.109Z] 17706.92 IOPS, 69.17 MiB/s [2024-11-15T10:05:50.109Z] [2024-11-15 11:05:34.102723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x186200 00:24:01.225 [2024-11-15 11:05:34.102762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:01.225 [2024-11-15 11:05:34.102795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.225 [2024-11-15 11:05:34.102803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:01.225 [2024-11-15 11:05:34.102814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x186200 00:24:01.225 [2024-11-15 11:05:34.102822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:01.225 [2024-11-15 11:05:34.102831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
16905.62 IOPS, 66.04 MiB/s [2024-11-15T10:05:50.112Z]
15698.07 IOPS, 61.32 MiB/s [2024-11-15T10:05:50.112Z]
14651.53 IOPS, 57.23 MiB/s [2024-11-15T10:05:50.112Z]
14383.25 IOPS, 56.18 MiB/s [2024-11-15T10:05:50.112Z]
14579.00 IOPS, 56.95 MiB/s [2024-11-15T10:05:50.112Z]
14710.50 IOPS, 57.46 MiB/s [2024-11-15T10:05:50.112Z]
14702.63 IOPS, 57.43 MiB/s [2024-11-15T10:05:50.112Z]
14690.15 IOPS, 57.38 MiB/s [2024-11-15T10:05:50.112Z]
14784.62 IOPS, 57.75 MiB/s [2024-11-15T10:05:50.112Z]
14919.32 IOPS, 58.28 MiB/s [2024-11-15T10:05:50.112Z]
15038.74 IOPS, 58.75 MiB/s [2024-11-15T10:05:50.112Z]
15035.00 IOPS, 58.73 MiB/s [2024-11-15T10:05:50.112Z]
15012.84 IOPS, 58.64 MiB/s [2024-11-15T10:05:50.112Z]
[… a second burst of paired nvme_qpair notices follows, elided: READ/WRITE commands on sqid:1, LBAs 66296-66952, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamped from 2024-11-15 11:05:47.402966 onward; the capture ends mid-record …]
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.403769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.403785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.403839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403900] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.403925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.403978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.403987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.403995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.404013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.404031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.404048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:01.229 [2024-11-15 11:05:47.404065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.404083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.404100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.404117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.404134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.404152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.404330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.404348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x186200 00:24:01.229 [2024-11-15 11:05:47.404365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.404382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 
cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:01.229 [2024-11-15 11:05:47.404393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.229 [2024-11-15 11:05:47.404400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x186200 00:24:01.230 [2024-11-15 11:05:47.404435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x186200 00:24:01.230 [2024-11-15 11:05:47.404535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x186200 
00:24:01.230 [2024-11-15 11:05:47.404551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x186200 00:24:01.230 [2024-11-15 11:05:47.404568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x186200 00:24:01.230 [2024-11-15 11:05:47.404586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x186200 00:24:01.230 [2024-11-15 11:05:47.404622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x186200 00:24:01.230 [2024-11-15 11:05:47.404690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:01.230 [2024-11-15 11:05:47.404717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.230 [2024-11-15 11:05:47.404724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:01.230 
[2024-11-15 11:05:47.404734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x186200 00:24:01.230 [2024-11-15 11:05:47.404742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:01.230 15027.27 IOPS, 58.70 MiB/s [2024-11-15T10:05:50.114Z] 15124.96 IOPS, 59.08 MiB/s [2024-11-15T10:05:50.114Z] Received shutdown signal, test time was about 27.965758 seconds 00:24:01.230 00:24:01.230 Latency(us) 00:24:01.230 [2024-11-15T10:05:50.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.230 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.230 Verification LBA range: start 0x0 length 0x4000 00:24:01.230 Nvme0n1 : 27.97 15206.60 59.40 0.00 0.00 8396.87 89.93 3019898.88 00:24:01.230 [2024-11-15T10:05:50.114Z] =================================================================================================================== 00:24:01.230 [2024-11-15T10:05:50.114Z] Total : 15206.60 59.40 0.00 0.00 8396.87 89.93 3019898.88 00:24:01.230 11:05:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:01.230 rmmod nvme_rdma 00:24:01.230 rmmod nvme_fabrics 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1530053 ']' 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1530053 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1530053 ']' 00:24:01.230 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1530053 00:24:01.230 11:05:50 
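The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions dumped above are the ANA state the target reports while multipath_status deliberately takes one path down; the initiator retries those READs and WRITEs on the surviving path, which is why the summary table still shows 0.00 Fail/s and 0.00 TO/s over the 27.97-second run. To tally how many completions were affected, something like the following works against a saved copy of this console output (console.log is a stand-in path, not a file this job produces):

    # count ANA-inaccessible completions in a saved copy of this log
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' console.log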
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:01.488 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.488 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1530053 00:24:01.488 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:01.488 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:01.488 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1530053' 00:24:01.488 killing process with pid 1530053 00:24:01.488 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1530053 00:24:01.488 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1530053 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:01.745 00:24:01.745 real 0m37.153s 00:24:01.745 user 1m49.539s 00:24:01.745 sys 0m7.641s 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:01.745 ************************************ 00:24:01.745 END TEST nvmf_host_multipath_status 00:24:01.745 ************************************ 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.745 ************************************ 00:24:01.745 START TEST nvmf_discovery_remove_ifc 00:24:01.745 ************************************ 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:01.745 * Looking for test storage... 
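The killprocess sequence traced above follows a careful pattern: probe the PID with kill -0, resolve the command name with ps --no-headers -o comm= and refuse to act if it resolves to sudo, then signal and reap with wait. A minimal sketch of that pattern, reconstructed from the trace rather than copied from autotest_common.sh (the real helper handles more corner cases):

    # kill-and-reap pattern, as reconstructed from the xtrace above
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                # mirrors the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reap it if it was our child
    }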
00:24:01.745 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:01.745 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:01.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.746 --rc genhtml_branch_coverage=1 00:24:01.746 --rc genhtml_function_coverage=1 00:24:01.746 --rc genhtml_legend=1 00:24:01.746 --rc geninfo_all_blocks=1 00:24:01.746 --rc geninfo_unexecuted_blocks=1 00:24:01.746 00:24:01.746 ' 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:01.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.746 --rc genhtml_branch_coverage=1 00:24:01.746 --rc genhtml_function_coverage=1 00:24:01.746 --rc genhtml_legend=1 00:24:01.746 --rc geninfo_all_blocks=1 00:24:01.746 --rc geninfo_unexecuted_blocks=1 00:24:01.746 00:24:01.746 ' 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:01.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.746 --rc genhtml_branch_coverage=1 00:24:01.746 --rc genhtml_function_coverage=1 00:24:01.746 --rc genhtml_legend=1 00:24:01.746 --rc geninfo_all_blocks=1 00:24:01.746 --rc geninfo_unexecuted_blocks=1 00:24:01.746 00:24:01.746 ' 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:01.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.746 --rc genhtml_branch_coverage=1 00:24:01.746 --rc genhtml_function_coverage=1 00:24:01.746 --rc genhtml_legend=1 00:24:01.746 --rc geninfo_all_blocks=1 00:24:01.746 --rc geninfo_unexecuted_blocks=1 00:24:01.746 00:24:01.746 ' 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
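The lt 1.15 2 exchange above is scripts/common.sh picking lcov options by version: both strings are split on an IFS of '.-:' and compared field by field, and since lcov 1.15 sorts below 2 the older-style LCOV_OPTS shown above get exported. A condensed sketch of that dotted-version compare (the real cmp_versions supports operators besides '<'):

    # field-by-field dotted-version compare, condensed from the trace above
    version_lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'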
00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.746 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.004 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:02.004 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:24:02.004 00:24:02.004 real 0m0.189s 00:24:02.004 user 0m0.118s 00:24:02.004 sys 0m0.084s 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.004 ************************************ 00:24:02.004 END TEST nvmf_discovery_remove_ifc 00:24:02.004 ************************************ 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.004 ************************************ 00:24:02.004 START TEST nvmf_identify_kernel_target 00:24:02.004 ************************************ 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:02.004 * Looking for test storage... 00:24:02.004 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.004 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
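The near-instant PASS recorded above (real 0m0.189s) is by design: discovery_remove_ifc.sh checks the transport before doing any setup and exits 0 on RDMA, so run_test still counts a success. Paraphrased from lines 14-16 of its trace, with the caveat that the variable name is an assumption — the xtrace only shows its expanded value, rdma:

    # early-exit guard, paraphrased from host/discovery_remove_ifc.sh lines 14-16;
    # TEST_TRANSPORT is assumed, the trace shows only the expanded value
    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi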
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:02.005 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.263 --rc genhtml_branch_coverage=1 00:24:02.263 --rc genhtml_function_coverage=1 00:24:02.263 --rc genhtml_legend=1 00:24:02.263 --rc geninfo_all_blocks=1 00:24:02.263 --rc geninfo_unexecuted_blocks=1 00:24:02.263 00:24:02.263 ' 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.263 --rc genhtml_branch_coverage=1 00:24:02.263 --rc genhtml_function_coverage=1 00:24:02.263 --rc genhtml_legend=1 00:24:02.263 --rc geninfo_all_blocks=1 00:24:02.263 --rc geninfo_unexecuted_blocks=1 00:24:02.263 00:24:02.263 ' 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.263 --rc genhtml_branch_coverage=1 00:24:02.263 --rc genhtml_function_coverage=1 00:24:02.263 --rc genhtml_legend=1 00:24:02.263 --rc geninfo_all_blocks=1 00:24:02.263 --rc geninfo_unexecuted_blocks=1 00:24:02.263 00:24:02.263 ' 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.263 --rc genhtml_branch_coverage=1 00:24:02.263 --rc genhtml_function_coverage=1 00:24:02.263 --rc genhtml_legend=1 00:24:02.263 --rc geninfo_all_blocks=1 00:24:02.263 --rc geninfo_unexecuted_blocks=1 00:24:02.263 00:24:02.263 ' 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.263 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:02.264 11:05:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
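The '[: : integer expression expected' complaint just above (emitted once per test as nvmf/common.sh is re-sourced) is the classic empty-operand failure: '[' '' -eq 1 ']' aborts because the left operand expands to an empty string rather than an integer, and build_nvmf_app_args simply falls through. Defaulting the variable before the numeric test would silence it; SOME_FLAG below is a stand-in, since the trace does not reveal which variable expands empty at line 33:

    # hypothetical hardening of the numeric test at nvmf/common.sh:33;
    # SOME_FLAG stands in for whatever variable actually expands to ''
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo 'optional feature enabled'   # illustrative action only
    fi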
local -ga x722 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.519 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:24:07.520 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:24:07.520 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:24:07.520 Found net devices under 0000:af:00.0: mlx_0_0 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:24:07.520 Found net devices under 0000:af:00.1: mlx_0_1 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ 
rdma == rdma ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:07.520 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:07.520 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:07.520 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:24:07.520 altname enp175s0f0np0 00:24:07.520 altname ens801f0np0 00:24:07.520 inet 192.168.100.8/24 scope global mlx_0_0 00:24:07.521 valid_lft forever preferred_lft forever 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:07.521 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:07.521 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:24:07.521 altname enp175s0f1np1 00:24:07.521 altname ens801f1np1 00:24:07.521 inet 192.168.100.9/24 scope global mlx_0_1 00:24:07.521 valid_lft forever preferred_lft forever 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # 
get_available_rdma_ips 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:07.521 192.168.100.9' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:07.521 192.168.100.9' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:07.521 192.168.100.9' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # 
configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:07.521 11:05:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:24:10.051 Waiting for block devices as requested 00:24:10.051 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:10.051 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:10.051 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:10.051 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:10.051 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:10.051 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:10.051 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:10.051 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:10.307 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:10.307 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:10.307 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:10.564 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:10.564 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:10.564 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:10.564 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:10.820 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:10.820 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:10.820 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:11.077 No valid GPT data, bailing 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:24:11.077 00:24:11.077 Discovery Log Number of Records 2, Generation counter 2 00:24:11.077 =====Discovery Log Entry 0====== 00:24:11.077 trtype: rdma 00:24:11.077 adrfam: ipv4 00:24:11.077 subtype: current discovery subsystem 00:24:11.077 treq: not specified, sq flow control disable supported 00:24:11.077 portid: 1 00:24:11.077 trsvcid: 4420 00:24:11.077 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:11.077 traddr: 192.168.100.8 00:24:11.077 eflags: none 00:24:11.077 rdma_prtype: not specified 00:24:11.077 rdma_qptype: connected 00:24:11.077 rdma_cms: rdma-cm 00:24:11.077 rdma_pkey: 0x0000 00:24:11.077 =====Discovery Log Entry 1====== 00:24:11.077 trtype: rdma 00:24:11.077 adrfam: ipv4 00:24:11.077 subtype: nvme subsystem 00:24:11.077 treq: not specified, sq flow control disable supported 00:24:11.077 portid: 1 00:24:11.077 
trsvcid: 4420 00:24:11.077 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:11.077 traddr: 192.168.100.8 00:24:11.077 eflags: none 00:24:11.077 rdma_prtype: not specified 00:24:11.077 rdma_qptype: connected 00:24:11.077 rdma_cms: rdma-cm 00:24:11.077 rdma_pkey: 0x0000 00:24:11.077 11:05:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:24:11.077 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:11.337 ===================================================== 00:24:11.337 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:11.337 ===================================================== 00:24:11.337 Controller Capabilities/Features 00:24:11.337 ================================ 00:24:11.337 Vendor ID: 0000 00:24:11.337 Subsystem Vendor ID: 0000 00:24:11.337 Serial Number: 7101e95a20d0d0a9034d 00:24:11.337 Model Number: Linux 00:24:11.337 Firmware Version: 6.8.9-20 00:24:11.337 Recommended Arb Burst: 0 00:24:11.337 IEEE OUI Identifier: 00 00 00 00:24:11.337 Multi-path I/O 00:24:11.337 May have multiple subsystem ports: No 00:24:11.337 May have multiple controllers: No 00:24:11.337 Associated with SR-IOV VF: No 00:24:11.337 Max Data Transfer Size: Unlimited 00:24:11.337 Max Number of Namespaces: 0 00:24:11.337 Max Number of I/O Queues: 1024 00:24:11.337 NVMe Specification Version (VS): 1.3 00:24:11.337 NVMe Specification Version (Identify): 1.3 00:24:11.337 Maximum Queue Entries: 128 00:24:11.337 Contiguous Queues Required: No 00:24:11.337 Arbitration Mechanisms Supported 00:24:11.337 Weighted Round Robin: Not Supported 00:24:11.337 Vendor Specific: Not Supported 00:24:11.337 Reset Timeout: 7500 ms 00:24:11.337 Doorbell Stride: 4 bytes 00:24:11.337 NVM Subsystem Reset: Not Supported 00:24:11.337 Command Sets Supported 00:24:11.337 NVM Command Set: Supported 00:24:11.337 Boot Partition: Not Supported 00:24:11.337 Memory Page Size Minimum: 4096 bytes 00:24:11.337 Memory Page Size Maximum: 4096 bytes 00:24:11.337 Persistent Memory Region: Not Supported 00:24:11.337 Optional Asynchronous Events Supported 00:24:11.337 Namespace Attribute Notices: Not Supported 00:24:11.337 Firmware Activation Notices: Not Supported 00:24:11.337 ANA Change Notices: Not Supported 00:24:11.337 PLE Aggregate Log Change Notices: Not Supported 00:24:11.337 LBA Status Info Alert Notices: Not Supported 00:24:11.337 EGE Aggregate Log Change Notices: Not Supported 00:24:11.337 Normal NVM Subsystem Shutdown event: Not Supported 00:24:11.337 Zone Descriptor Change Notices: Not Supported 00:24:11.337 Discovery Log Change Notices: Supported 00:24:11.337 Controller Attributes 00:24:11.337 128-bit Host Identifier: Not Supported 00:24:11.337 Non-Operational Permissive Mode: Not Supported 00:24:11.337 NVM Sets: Not Supported 00:24:11.337 Read Recovery Levels: Not Supported 00:24:11.337 Endurance Groups: Not Supported 00:24:11.337 Predictable Latency Mode: Not Supported 00:24:11.337 Traffic Based Keep ALive: Not Supported 00:24:11.337 Namespace Granularity: Not Supported 00:24:11.337 SQ Associations: Not Supported 00:24:11.337 UUID List: Not Supported 00:24:11.337 Multi-Domain Subsystem: Not Supported 00:24:11.337 Fixed Capacity Management: Not Supported 00:24:11.337 Variable Capacity Management: Not Supported 00:24:11.337 Delete Endurance Group: Not Supported 00:24:11.337 Delete NVM Set: Not Supported 00:24:11.337 Extended 
LBA Formats Supported: Not Supported 00:24:11.337 Flexible Data Placement Supported: Not Supported 00:24:11.337 00:24:11.337 Controller Memory Buffer Support 00:24:11.337 ================================ 00:24:11.337 Supported: No 00:24:11.337 00:24:11.337 Persistent Memory Region Support 00:24:11.337 ================================ 00:24:11.337 Supported: No 00:24:11.337 00:24:11.337 Admin Command Set Attributes 00:24:11.337 ============================ 00:24:11.337 Security Send/Receive: Not Supported 00:24:11.337 Format NVM: Not Supported 00:24:11.337 Firmware Activate/Download: Not Supported 00:24:11.337 Namespace Management: Not Supported 00:24:11.337 Device Self-Test: Not Supported 00:24:11.337 Directives: Not Supported 00:24:11.337 NVMe-MI: Not Supported 00:24:11.337 Virtualization Management: Not Supported 00:24:11.337 Doorbell Buffer Config: Not Supported 00:24:11.337 Get LBA Status Capability: Not Supported 00:24:11.337 Command & Feature Lockdown Capability: Not Supported 00:24:11.337 Abort Command Limit: 1 00:24:11.337 Async Event Request Limit: 1 00:24:11.337 Number of Firmware Slots: N/A 00:24:11.337 Firmware Slot 1 Read-Only: N/A 00:24:11.337 Firmware Activation Without Reset: N/A 00:24:11.337 Multiple Update Detection Support: N/A 00:24:11.337 Firmware Update Granularity: No Information Provided 00:24:11.337 Per-Namespace SMART Log: No 00:24:11.337 Asymmetric Namespace Access Log Page: Not Supported 00:24:11.337 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:11.337 Command Effects Log Page: Not Supported 00:24:11.337 Get Log Page Extended Data: Supported 00:24:11.337 Telemetry Log Pages: Not Supported 00:24:11.337 Persistent Event Log Pages: Not Supported 00:24:11.337 Supported Log Pages Log Page: May Support 00:24:11.337 Commands Supported & Effects Log Page: Not Supported 00:24:11.337 Feature Identifiers & Effects Log Page:May Support 00:24:11.337 NVMe-MI Commands & Effects Log Page: May Support 00:24:11.337 Data Area 4 for Telemetry Log: Not Supported 00:24:11.337 Error Log Page Entries Supported: 1 00:24:11.337 Keep Alive: Not Supported 00:24:11.337 00:24:11.337 NVM Command Set Attributes 00:24:11.337 ========================== 00:24:11.337 Submission Queue Entry Size 00:24:11.337 Max: 1 00:24:11.337 Min: 1 00:24:11.337 Completion Queue Entry Size 00:24:11.337 Max: 1 00:24:11.337 Min: 1 00:24:11.337 Number of Namespaces: 0 00:24:11.337 Compare Command: Not Supported 00:24:11.337 Write Uncorrectable Command: Not Supported 00:24:11.337 Dataset Management Command: Not Supported 00:24:11.337 Write Zeroes Command: Not Supported 00:24:11.337 Set Features Save Field: Not Supported 00:24:11.338 Reservations: Not Supported 00:24:11.338 Timestamp: Not Supported 00:24:11.338 Copy: Not Supported 00:24:11.338 Volatile Write Cache: Not Present 00:24:11.338 Atomic Write Unit (Normal): 1 00:24:11.338 Atomic Write Unit (PFail): 1 00:24:11.338 Atomic Compare & Write Unit: 1 00:24:11.338 Fused Compare & Write: Not Supported 00:24:11.338 Scatter-Gather List 00:24:11.338 SGL Command Set: Supported 00:24:11.338 SGL Keyed: Supported 00:24:11.338 SGL Bit Bucket Descriptor: Not Supported 00:24:11.338 SGL Metadata Pointer: Not Supported 00:24:11.338 Oversized SGL: Not Supported 00:24:11.338 SGL Metadata Address: Not Supported 00:24:11.338 SGL Offset: Supported 00:24:11.338 Transport SGL Data Block: Not Supported 00:24:11.338 Replay Protected Memory Block: Not Supported 00:24:11.338 00:24:11.338 Firmware Slot Information 00:24:11.338 ========================= 00:24:11.338 Active slot: 
0 00:24:11.338 00:24:11.338 00:24:11.338 Error Log 00:24:11.338 ========= 00:24:11.338 00:24:11.338 Active Namespaces 00:24:11.338 ================= 00:24:11.338 Discovery Log Page 00:24:11.338 ================== 00:24:11.338 Generation Counter: 2 00:24:11.338 Number of Records: 2 00:24:11.338 Record Format: 0 00:24:11.338 00:24:11.338 Discovery Log Entry 0 00:24:11.338 ---------------------- 00:24:11.338 Transport Type: 1 (RDMA) 00:24:11.338 Address Family: 1 (IPv4) 00:24:11.338 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:11.338 Entry Flags: 00:24:11.338 Duplicate Returned Information: 0 00:24:11.338 Explicit Persistent Connection Support for Discovery: 0 00:24:11.338 Transport Requirements: 00:24:11.338 Secure Channel: Not Specified 00:24:11.338 Port ID: 1 (0x0001) 00:24:11.338 Controller ID: 65535 (0xffff) 00:24:11.338 Admin Max SQ Size: 32 00:24:11.338 Transport Service Identifier: 4420 00:24:11.338 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:11.338 Transport Address: 192.168.100.8 00:24:11.338 Transport Specific Address Subtype - RDMA 00:24:11.338 RDMA QP Service Type: 1 (Reliable Connected) 00:24:11.338 RDMA Provider Type: 1 (No provider specified) 00:24:11.338 RDMA CM Service: 1 (RDMA_CM) 00:24:11.338 Discovery Log Entry 1 00:24:11.338 ---------------------- 00:24:11.338 Transport Type: 1 (RDMA) 00:24:11.338 Address Family: 1 (IPv4) 00:24:11.338 Subsystem Type: 2 (NVM Subsystem) 00:24:11.338 Entry Flags: 00:24:11.338 Duplicate Returned Information: 0 00:24:11.338 Explicit Persistent Connection Support for Discovery: 0 00:24:11.338 Transport Requirements: 00:24:11.338 Secure Channel: Not Specified 00:24:11.338 Port ID: 1 (0x0001) 00:24:11.338 Controller ID: 65535 (0xffff) 00:24:11.338 Admin Max SQ Size: 32 00:24:11.338 Transport Service Identifier: 4420 00:24:11.338 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:11.338 Transport Address: 192.168.100.8 00:24:11.338 Transport Specific Address Subtype - RDMA 00:24:11.338 RDMA QP Service Type: 1 (Reliable Connected) 00:24:11.338 RDMA Provider Type: 1 (No provider specified) 00:24:11.338 RDMA CM Service: 1 (RDMA_CM) 00:24:11.338 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:11.338 get_feature(0x01) failed 00:24:11.338 get_feature(0x02) failed 00:24:11.338 get_feature(0x04) failed 00:24:11.338 ===================================================== 00:24:11.338 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:24:11.338 ===================================================== 00:24:11.338 Controller Capabilities/Features 00:24:11.338 ================================ 00:24:11.338 Vendor ID: 0000 00:24:11.338 Subsystem Vendor ID: 0000 00:24:11.338 Serial Number: 34ec123254c3af8ad914 00:24:11.338 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:11.338 Firmware Version: 6.8.9-20 00:24:11.338 Recommended Arb Burst: 6 00:24:11.338 IEEE OUI Identifier: 00 00 00 00:24:11.338 Multi-path I/O 00:24:11.338 May have multiple subsystem ports: Yes 00:24:11.338 May have multiple controllers: Yes 00:24:11.338 Associated with SR-IOV VF: No 00:24:11.338 Max Data Transfer Size: 1048576 00:24:11.338 Max Number of Namespaces: 1024 00:24:11.338 Max Number of I/O Queues: 128 00:24:11.338 NVMe Specification Version (VS): 1.3 00:24:11.338 NVMe 
Specification Version (Identify): 1.3 00:24:11.338 Maximum Queue Entries: 128 00:24:11.338 Contiguous Queues Required: No 00:24:11.338 Arbitration Mechanisms Supported 00:24:11.338 Weighted Round Robin: Not Supported 00:24:11.338 Vendor Specific: Not Supported 00:24:11.338 Reset Timeout: 7500 ms 00:24:11.338 Doorbell Stride: 4 bytes 00:24:11.338 NVM Subsystem Reset: Not Supported 00:24:11.338 Command Sets Supported 00:24:11.338 NVM Command Set: Supported 00:24:11.338 Boot Partition: Not Supported 00:24:11.338 Memory Page Size Minimum: 4096 bytes 00:24:11.338 Memory Page Size Maximum: 4096 bytes 00:24:11.338 Persistent Memory Region: Not Supported 00:24:11.338 Optional Asynchronous Events Supported 00:24:11.338 Namespace Attribute Notices: Supported 00:24:11.338 Firmware Activation Notices: Not Supported 00:24:11.338 ANA Change Notices: Supported 00:24:11.338 PLE Aggregate Log Change Notices: Not Supported 00:24:11.338 LBA Status Info Alert Notices: Not Supported 00:24:11.338 EGE Aggregate Log Change Notices: Not Supported 00:24:11.338 Normal NVM Subsystem Shutdown event: Not Supported 00:24:11.338 Zone Descriptor Change Notices: Not Supported 00:24:11.338 Discovery Log Change Notices: Not Supported 00:24:11.338 Controller Attributes 00:24:11.338 128-bit Host Identifier: Supported 00:24:11.338 Non-Operational Permissive Mode: Not Supported 00:24:11.338 NVM Sets: Not Supported 00:24:11.338 Read Recovery Levels: Not Supported 00:24:11.338 Endurance Groups: Not Supported 00:24:11.338 Predictable Latency Mode: Not Supported 00:24:11.338 Traffic Based Keep ALive: Supported 00:24:11.338 Namespace Granularity: Not Supported 00:24:11.338 SQ Associations: Not Supported 00:24:11.338 UUID List: Not Supported 00:24:11.338 Multi-Domain Subsystem: Not Supported 00:24:11.338 Fixed Capacity Management: Not Supported 00:24:11.338 Variable Capacity Management: Not Supported 00:24:11.338 Delete Endurance Group: Not Supported 00:24:11.338 Delete NVM Set: Not Supported 00:24:11.338 Extended LBA Formats Supported: Not Supported 00:24:11.338 Flexible Data Placement Supported: Not Supported 00:24:11.338 00:24:11.338 Controller Memory Buffer Support 00:24:11.338 ================================ 00:24:11.338 Supported: No 00:24:11.338 00:24:11.338 Persistent Memory Region Support 00:24:11.338 ================================ 00:24:11.338 Supported: No 00:24:11.338 00:24:11.338 Admin Command Set Attributes 00:24:11.338 ============================ 00:24:11.338 Security Send/Receive: Not Supported 00:24:11.338 Format NVM: Not Supported 00:24:11.338 Firmware Activate/Download: Not Supported 00:24:11.338 Namespace Management: Not Supported 00:24:11.338 Device Self-Test: Not Supported 00:24:11.338 Directives: Not Supported 00:24:11.338 NVMe-MI: Not Supported 00:24:11.338 Virtualization Management: Not Supported 00:24:11.338 Doorbell Buffer Config: Not Supported 00:24:11.338 Get LBA Status Capability: Not Supported 00:24:11.338 Command & Feature Lockdown Capability: Not Supported 00:24:11.338 Abort Command Limit: 4 00:24:11.338 Async Event Request Limit: 4 00:24:11.338 Number of Firmware Slots: N/A 00:24:11.338 Firmware Slot 1 Read-Only: N/A 00:24:11.338 Firmware Activation Without Reset: N/A 00:24:11.338 Multiple Update Detection Support: N/A 00:24:11.338 Firmware Update Granularity: No Information Provided 00:24:11.338 Per-Namespace SMART Log: Yes 00:24:11.338 Asymmetric Namespace Access Log Page: Supported 00:24:11.338 ANA Transition Time : 10 sec 00:24:11.338 00:24:11.338 Asymmetric Namespace Access Capabilities 
00:24:11.338 ANA Optimized State : Supported 00:24:11.338 ANA Non-Optimized State : Supported 00:24:11.338 ANA Inaccessible State : Supported 00:24:11.338 ANA Persistent Loss State : Supported 00:24:11.338 ANA Change State : Supported 00:24:11.338 ANAGRPID is not changed : No 00:24:11.338 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:11.338 00:24:11.338 ANA Group Identifier Maximum : 128 00:24:11.338 Number of ANA Group Identifiers : 128 00:24:11.338 Max Number of Allowed Namespaces : 1024 00:24:11.338 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:11.338 Command Effects Log Page: Supported 00:24:11.338 Get Log Page Extended Data: Supported 00:24:11.338 Telemetry Log Pages: Not Supported 00:24:11.338 Persistent Event Log Pages: Not Supported 00:24:11.338 Supported Log Pages Log Page: May Support 00:24:11.339 Commands Supported & Effects Log Page: Not Supported 00:24:11.339 Feature Identifiers & Effects Log Page:May Support 00:24:11.339 NVMe-MI Commands & Effects Log Page: May Support 00:24:11.339 Data Area 4 for Telemetry Log: Not Supported 00:24:11.339 Error Log Page Entries Supported: 128 00:24:11.339 Keep Alive: Supported 00:24:11.339 Keep Alive Granularity: 1000 ms 00:24:11.339 00:24:11.339 NVM Command Set Attributes 00:24:11.339 ========================== 00:24:11.339 Submission Queue Entry Size 00:24:11.339 Max: 64 00:24:11.339 Min: 64 00:24:11.339 Completion Queue Entry Size 00:24:11.339 Max: 16 00:24:11.339 Min: 16 00:24:11.339 Number of Namespaces: 1024 00:24:11.339 Compare Command: Not Supported 00:24:11.339 Write Uncorrectable Command: Not Supported 00:24:11.339 Dataset Management Command: Supported 00:24:11.339 Write Zeroes Command: Supported 00:24:11.339 Set Features Save Field: Not Supported 00:24:11.339 Reservations: Not Supported 00:24:11.339 Timestamp: Not Supported 00:24:11.339 Copy: Not Supported 00:24:11.339 Volatile Write Cache: Present 00:24:11.339 Atomic Write Unit (Normal): 1 00:24:11.339 Atomic Write Unit (PFail): 1 00:24:11.339 Atomic Compare & Write Unit: 1 00:24:11.339 Fused Compare & Write: Not Supported 00:24:11.339 Scatter-Gather List 00:24:11.339 SGL Command Set: Supported 00:24:11.339 SGL Keyed: Supported 00:24:11.339 SGL Bit Bucket Descriptor: Not Supported 00:24:11.339 SGL Metadata Pointer: Not Supported 00:24:11.339 Oversized SGL: Not Supported 00:24:11.339 SGL Metadata Address: Not Supported 00:24:11.339 SGL Offset: Supported 00:24:11.339 Transport SGL Data Block: Not Supported 00:24:11.339 Replay Protected Memory Block: Not Supported 00:24:11.339 00:24:11.339 Firmware Slot Information 00:24:11.339 ========================= 00:24:11.339 Active slot: 0 00:24:11.339 00:24:11.339 Asymmetric Namespace Access 00:24:11.339 =========================== 00:24:11.339 Change Count : 0 00:24:11.339 Number of ANA Group Descriptors : 1 00:24:11.339 ANA Group Descriptor : 0 00:24:11.339 ANA Group ID : 1 00:24:11.339 Number of NSID Values : 1 00:24:11.339 Change Count : 0 00:24:11.339 ANA State : 1 00:24:11.339 Namespace Identifier : 1 00:24:11.339 00:24:11.339 Commands Supported and Effects 00:24:11.339 ============================== 00:24:11.339 Admin Commands 00:24:11.339 -------------- 00:24:11.339 Get Log Page (02h): Supported 00:24:11.339 Identify (06h): Supported 00:24:11.339 Abort (08h): Supported 00:24:11.339 Set Features (09h): Supported 00:24:11.339 Get Features (0Ah): Supported 00:24:11.339 Asynchronous Event Request (0Ch): Supported 00:24:11.339 Keep Alive (18h): Supported 00:24:11.339 I/O Commands 00:24:11.339 ------------ 00:24:11.339 Flush 
(00h): Supported 00:24:11.339 Write (01h): Supported LBA-Change 00:24:11.339 Read (02h): Supported 00:24:11.339 Write Zeroes (08h): Supported LBA-Change 00:24:11.339 Dataset Management (09h): Supported 00:24:11.339 00:24:11.339 Error Log 00:24:11.339 ========= 00:24:11.339 Entry: 0 00:24:11.339 Error Count: 0x3 00:24:11.339 Submission Queue Id: 0x0 00:24:11.339 Command Id: 0x5 00:24:11.339 Phase Bit: 0 00:24:11.339 Status Code: 0x2 00:24:11.339 Status Code Type: 0x0 00:24:11.339 Do Not Retry: 1 00:24:11.339 Error Location: 0x28 00:24:11.339 LBA: 0x0 00:24:11.339 Namespace: 0x0 00:24:11.339 Vendor Log Page: 0x0 00:24:11.339 ----------- 00:24:11.339 Entry: 1 00:24:11.339 Error Count: 0x2 00:24:11.339 Submission Queue Id: 0x0 00:24:11.339 Command Id: 0x5 00:24:11.339 Phase Bit: 0 00:24:11.339 Status Code: 0x2 00:24:11.339 Status Code Type: 0x0 00:24:11.339 Do Not Retry: 1 00:24:11.339 Error Location: 0x28 00:24:11.339 LBA: 0x0 00:24:11.339 Namespace: 0x0 00:24:11.339 Vendor Log Page: 0x0 00:24:11.339 ----------- 00:24:11.339 Entry: 2 00:24:11.339 Error Count: 0x1 00:24:11.339 Submission Queue Id: 0x0 00:24:11.339 Command Id: 0x0 00:24:11.339 Phase Bit: 0 00:24:11.339 Status Code: 0x2 00:24:11.339 Status Code Type: 0x0 00:24:11.339 Do Not Retry: 1 00:24:11.339 Error Location: 0x28 00:24:11.339 LBA: 0x0 00:24:11.339 Namespace: 0x0 00:24:11.339 Vendor Log Page: 0x0 00:24:11.339 00:24:11.339 Number of Queues 00:24:11.339 ================ 00:24:11.339 Number of I/O Submission Queues: 128 00:24:11.339 Number of I/O Completion Queues: 128 00:24:11.339 00:24:11.339 ZNS Specific Controller Data 00:24:11.339 ============================ 00:24:11.339 Zone Append Size Limit: 0 00:24:11.339 00:24:11.339 00:24:11.339 Active Namespaces 00:24:11.339 ================= 00:24:11.339 get_feature(0x05) failed 00:24:11.339 Namespace ID:1 00:24:11.339 Command Set Identifier: NVM (00h) 00:24:11.339 Deallocate: Supported 00:24:11.339 Deallocated/Unwritten Error: Not Supported 00:24:11.339 Deallocated Read Value: Unknown 00:24:11.339 Deallocate in Write Zeroes: Not Supported 00:24:11.339 Deallocated Guard Field: 0xFFFF 00:24:11.339 Flush: Supported 00:24:11.339 Reservation: Not Supported 00:24:11.339 Namespace Sharing Capabilities: Multiple Controllers 00:24:11.339 Size (in LBAs): 1953525168 (931GiB) 00:24:11.339 Capacity (in LBAs): 1953525168 (931GiB) 00:24:11.339 Utilization (in LBAs): 1953525168 (931GiB) 00:24:11.339 UUID: dcd295b9-2ec2-45b4-bf79-e25f69d8b0ad 00:24:11.339 Thin Provisioning: Not Supported 00:24:11.339 Per-NS Atomic Units: Yes 00:24:11.339 Atomic Boundary Size (Normal): 0 00:24:11.339 Atomic Boundary Size (PFail): 0 00:24:11.339 Atomic Boundary Offset: 0 00:24:11.339 NGUID/EUI64 Never Reused: No 00:24:11.339 ANA group ID: 1 00:24:11.339 Namespace Write Protected: No 00:24:11.339 Number of LBA Formats: 1 00:24:11.339 Current LBA Format: LBA Format #00 00:24:11.339 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:11.339 00:24:11.339 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:11.339 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:11.339 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:11.339 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:11.339 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:11.339 
11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:11.339 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.339 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:11.339 rmmod nvme_rdma 00:24:11.339 rmmod nvme_fabrics 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:24:11.597 11:06:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:24:14.872 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:14.872 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:15.131 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 
00:24:15.388 00:24:15.388 real 0m13.396s 00:24:15.388 user 0m3.832s 00:24:15.388 sys 0m7.863s 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.388 ************************************ 00:24:15.388 END TEST nvmf_identify_kernel_target 00:24:15.388 ************************************ 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.388 ************************************ 00:24:15.388 START TEST nvmf_auth_host 00:24:15.388 ************************************ 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:24:15.388 * Looking for test storage... 00:24:15.388 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:15.388 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.646 --rc genhtml_branch_coverage=1 00:24:15.646 --rc genhtml_function_coverage=1 00:24:15.646 --rc genhtml_legend=1 00:24:15.646 --rc geninfo_all_blocks=1 00:24:15.646 --rc geninfo_unexecuted_blocks=1 00:24:15.646 00:24:15.646 ' 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.646 --rc genhtml_branch_coverage=1 00:24:15.646 --rc genhtml_function_coverage=1 00:24:15.646 --rc genhtml_legend=1 00:24:15.646 --rc geninfo_all_blocks=1 00:24:15.646 --rc geninfo_unexecuted_blocks=1 00:24:15.646 00:24:15.646 ' 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.646 --rc genhtml_branch_coverage=1 00:24:15.646 --rc genhtml_function_coverage=1 00:24:15.646 --rc genhtml_legend=1 00:24:15.646 --rc geninfo_all_blocks=1 00:24:15.646 --rc geninfo_unexecuted_blocks=1 00:24:15.646 00:24:15.646 ' 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.646 --rc genhtml_branch_coverage=1 00:24:15.646 --rc genhtml_function_coverage=1 00:24:15.646 --rc genhtml_legend=1 00:24:15.646 --rc geninfo_all_blocks=1 00:24:15.646 --rc geninfo_unexecuted_blocks=1 00:24:15.646 00:24:15.646 ' 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.646 11:06:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:24:15.646 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.647 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.647 11:06:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:24:20.909 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:24:20.909 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:20.909 11:06:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:24:20.909 Found net devices under 0000:af:00.0: mlx_0_0 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:24:20.909 Found net devices under 0000:af:00.1: mlx_0_1 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # 
modprobe iw_cm 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:20.909 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:20.909 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.909 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 
00:24:20.909 altname enp175s0f0np0 00:24:20.909 altname ens801f0np0 00:24:20.909 inet 192.168.100.8/24 scope global mlx_0_0 00:24:20.910 valid_lft forever preferred_lft forever 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:20.910 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.910 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:24:20.910 altname enp175s0f1np1 00:24:20.910 altname ens801f1np1 00:24:20.910 inet 192.168.100.9/24 scope global mlx_0_1 00:24:20.910 valid_lft forever preferred_lft forever 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:20.910 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
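[annotation] The get_ip_address calls traced above boil down to a single pipeline over iproute2 output: field 4 of `ip -o -4 addr show <if>` is the CIDR address, and cut strips the prefix length. A minimal standalone sketch of the same lookup:

  # Print the IPv4 address of an interface, as the traced get_ip_address does.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig
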
00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:21.168 192.168.100.9' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:21.168 192.168.100.9' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:21.168 192.168.100.9' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.168 11:06:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1545107 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1545107 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1545107 ']' 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.168 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:21.169 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.169 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:21.169 11:06:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1ff520a1b4faa35a62dc3282d4a211fd 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DLu 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1ff520a1b4faa35a62dc3282d4a211fd 0 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1ff520a1b4faa35a62dc3282d4a211fd 0 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1ff520a1b4faa35a62dc3282d4a211fd 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DLu 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DLu 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DLu 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=699cba32f87c382c652c8965a927ad24d399d0ef093cd985386ce770fe2f6a2e 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GcM 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 699cba32f87c382c652c8965a927ad24d399d0ef093cd985386ce770fe2f6a2e 3 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 699cba32f87c382c652c8965a927ad24d399d0ef093cd985386ce770fe2f6a2e 3 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=699cba32f87c382c652c8965a927ad24d399d0ef093cd985386ce770fe2f6a2e 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GcM 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GcM 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GcM 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 
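[annotation] The raw secrets being generated here are plain /dev/urandom output: for an N-hex-character key the trace reads N/2 bytes through xxd (-p plain hex, -c0 no wrapping, -l byte count). A sketch of that step; the wrapper function name is illustrative, not the harness's own:

  # Produce <len> hex characters of random key material, as the traced
  # `xxd -p -c0 -l <bytes> /dev/urandom` calls do.
  gen_hex_key() {
      local len=$1                      # desired length in hex characters
      xxd -p -c0 -l "$((len / 2))" /dev/urandom
  }
  gen_hex_key 32    # 16 random bytes -> 32 hex chars, e.g. 1ff520a1b4faa35a62dc3282d4a211fd
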
00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba6fe7191da9ab4372c01dba27070d10bf42432f8f9f3b90 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xJQ 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba6fe7191da9ab4372c01dba27070d10bf42432f8f9f3b90 0 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba6fe7191da9ab4372c01dba27070d10bf42432f8f9f3b90 0 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba6fe7191da9ab4372c01dba27070d10bf42432f8f9f3b90 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:21.430 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xJQ 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xJQ 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xJQ 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=84fa20adc7ea613817792e79e7f7687e75eadd73f6199658 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hj2 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 84fa20adc7ea613817792e79e7f7687e75eadd73f6199658 2 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 84fa20adc7ea613817792e79e7f7687e75eadd73f6199658 2 00:24:21.687 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=84fa20adc7ea613817792e79e7f7687e75eadd73f6199658 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hj2 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hj2 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hj2 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aac48aab62c08a8d881a3c010373df04 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wLJ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aac48aab62c08a8d881a3c010373df04 1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aac48aab62c08a8d881a3c010373df04 1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aac48aab62c08a8d881a3c010373df04 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wLJ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wLJ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wLJ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:21.688 11:06:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79cce93b01722b072b10005d08249602 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wxZ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79cce93b01722b072b10005d08249602 1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79cce93b01722b072b10005d08249602 1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=79cce93b01722b072b10005d08249602 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wxZ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wxZ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wxZ 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71723c8831d34ddbe971aeb47f55ab46fa27144a4fe34e53 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YPn 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71723c8831d34ddbe971aeb47f55ab46fa27144a4fe34e53 2 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71723c8831d34ddbe971aeb47f55ab46fa27144a4fe34e53 2 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71723c8831d34ddbe971aeb47f55ab46fa27144a4fe34e53 00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 
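[annotation] The trailing digit in each format_dhchap_key call comes from the digests table every gen_dhchap_key invocation declares: it maps the hash name to the numeric DH-HMAC-CHAP hash identifier embedded in the formatted key. Directly from the trace:

  declare -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
  digest=${digests[sha384]}    # 2, matching the `format_dhchap_key ... 2` calls above
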
00:24:21.688 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YPn 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YPn 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.YPn 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=37c494cd81aed5c89b12111237f98be1 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DQx 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 37c494cd81aed5c89b12111237f98be1 0 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 37c494cd81aed5c89b12111237f98be1 0 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=37c494cd81aed5c89b12111237f98be1 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DQx 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DQx 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DQx 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # 
key=d7f66299b5df9244af7b1bb11bdb5ea644ab47949f48d2ae420953de6c5a0790 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DBU 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d7f66299b5df9244af7b1bb11bdb5ea644ab47949f48d2ae420953de6c5a0790 3 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d7f66299b5df9244af7b1bb11bdb5ea644ab47949f48d2ae420953de6c5a0790 3 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d7f66299b5df9244af7b1bb11bdb5ea644ab47949f48d2ae420953de6c5a0790 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DBU 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DBU 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DBU 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1545107 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1545107 ']' 00:24:21.946 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.947 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:21.947 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
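[annotation] The `python -` steps do the actual key formatting, but the heredoc they execute is not captured in the log. Under the NVMe DH-HMAC-CHAP secret convention (DHHC-1:<hash-id>:base64 of the key followed by its little-endian CRC-32, then a closing colon), a hedged sketch of what that step computes; the function name and the python3 invocation are illustrative assumptions, not the harness's own code:

  # Assumed reconstruction of the formatting step: key || crc32_le(key),
  # base64-encoded, wrapped as DHHC-1:<hash-id>:<b64>:
  format_dhchap_key_sketch() {
      local key=$1 digest=$2
      python3 - "$key" "$digest" <<'EOF'
  import base64, sys, zlib
  key, digest = sys.argv[1].encode(), int(sys.argv[2])
  crc = zlib.crc32(key).to_bytes(4, "little")
  print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
  EOF
  }
  format_dhchap_key_sketch 1ff520a1b4faa35a62dc3282d4a211fd 0    # -> DHHC-1:00:...:
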
00:24:21.947 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:21.947 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DLu 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GcM ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GcM 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xJQ 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hj2 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hj2 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wLJ 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wxZ ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wxZ 00:24:22.205 11:06:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.YPn 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DQx ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DQx 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.205 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DBU 00:24:22.206 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.206 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.206 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.206 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:22.206 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:22.206 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:22.206 11:06:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:24:22.206 11:06:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:22.206 11:06:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:24:25.486 Waiting for block devices as requested 00:24:25.486 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:25.486 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:25.486 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:25.486 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:25.486 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:25.486 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:25.486 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:25.486 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:25.486 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:25.744 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:25.744 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:25.744 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:25.744 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:26.002 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:26.002 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:26.002 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:26.259 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:26.824 No valid GPT data, bailing 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:26.824 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:24:27.082 00:24:27.082 Discovery Log Number of Records 2, Generation counter 2 00:24:27.082 =====Discovery Log Entry 0====== 00:24:27.082 trtype: rdma 00:24:27.082 adrfam: ipv4 00:24:27.082 subtype: current discovery subsystem 00:24:27.082 treq: not specified, sq flow control disable supported 00:24:27.082 portid: 1 00:24:27.082 trsvcid: 4420 00:24:27.082 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:27.082 traddr: 192.168.100.8 00:24:27.082 eflags: none 00:24:27.082 rdma_prtype: not specified 00:24:27.082 rdma_qptype: connected 00:24:27.082 rdma_cms: rdma-cm 00:24:27.082 rdma_pkey: 0x0000 00:24:27.082 =====Discovery Log Entry 1====== 00:24:27.082 trtype: rdma 00:24:27.082 adrfam: ipv4 00:24:27.082 subtype: nvme subsystem 00:24:27.082 treq: not specified, sq flow control disable supported 00:24:27.082 portid: 1 00:24:27.082 trsvcid: 4420 00:24:27.082 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:27.082 traddr: 192.168.100.8 00:24:27.082 eflags: none 00:24:27.082 rdma_prtype: not specified 00:24:27.082 rdma_qptype: connected 00:24:27.082 rdma_cms: rdma-cm 00:24:27.082 rdma_pkey: 0x0000 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
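configure_kernel_target above assembles the Linux-kernel NVMe-oF target purely through configfs. The redirect targets are hidden by xtrace, but the values written line up with the standard nvmet attribute names, which are assumed in this sketch; the echo 0 just before this point disables allow_any_host so the explicit host allow-list that follows takes effect:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"   # backing block device found by the scan above
  echo 1             > "$subsys/namespaces/1/enable"
  echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
  echo rdma          > "$nvmet/ports/1/addr_trtype"
  echo 4420          > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4          > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"              # expose the subsystem on the port
  # DH-CHAP testing needs an explicit allow-list, not allow_any_host:
  echo 0 > "$subsys/attr_allow_any_host"
  mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"

The nvme discover output above confirms the port is live: both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 answer on 192.168.100.8:4420 over RDMA.
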
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.082 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.083 nvme0n1 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.083 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.341 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.341 11:06:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.341 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.598 nvme0n1 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
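Every (digest, dhgroup, keyid) combination exercised from here on repeats the same four-step cycle on the host side. Condensed, with the values from the iteration in progress (rpc.py standing in for the rpc_cmd wrapper used by the trace):

  # 1. Restrict the host to a single digest/dhgroup combination.
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # 2. Connect with the key under test; the controller key is passed only
  #    when a ckey exists for this keyid.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. Verify authentication succeeded: the controller must be visible.
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # 4. Tear down before the next combination.
  rpc.py bdev_nvme_detach_controller nvme0
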
00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.598 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.856 nvme0n1 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.856 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.114 nvme0n1 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:28.114 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:28.115 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:28.115 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:28.115 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.115 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.115 11:06:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.372 nvme0n1 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.372 11:06:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.372 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.630 nvme0n1 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 
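The @100-@102 source markers visible in the trace give away the loop nest driving all of these iterations, which has just advanced from ffdhe2048 to ffdhe3072. Reconstructed from those markers:

  # host/auth.sh, as implied by the for-loop lines echoed in the xtrace:
  for digest in "${digests[@]}"; do        # sha256, sha384, sha512
      for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048 .. ffdhe8192
          for keyid in "${!keys[@]}"; do   # 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # arm the kernel target
              connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify, detach
          done
      done
  done
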
00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.630 11:06:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.630 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.888 nvme0n1 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:28.888 11:06:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:28.888 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.889 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.889 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.146 nvme0n1 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.146 11:06:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
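Before each connect, nvmet_auth_set_key re-arms the kernel target's per-host DH-CHAP state. The DHHC-1:<nn>: prefix on each secret encodes its transformation (00 = cleartext, 01/02/03 = SHA-256/384/512-hashed). The configfs attribute names below are the standard kernel nvmet ones; they are not visible in the xtrace output, so treat them as an assumption:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest for this iteration
  echo ffdhe2048      > "$host/dhchap_dhgroup"  # DH group for this iteration
  echo "$key"         > "$host/dhchap_key"      # DHHC-1:... host secret
  # Controller secret only when this keyid has one (bidirectional auth):
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
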
00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.146 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.404 nvme0n1 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.404 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:29.661 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.662 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.919 nvme0n1 00:24:29.919 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.920 11:06:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.920 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.178 nvme0n1 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.178 
11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.178 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.179 11:06:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.436 nvme0n1 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.436 
11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.436 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.694 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.952 nvme0n1 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:30.952 
11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.952 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.210 nvme0n1 00:24:31.210 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.210 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.210 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.210 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.210 11:06:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.210 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.211 11:06:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.211 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.777 nvme0n1 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.777 
11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.777 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.036 nvme0n1 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.036 11:06:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.602 nvme0n1 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.602 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.171 nvme0n1 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.171 11:06:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.171 11:06:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.508 nvme0n1 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.508 11:06:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:33.508 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:33.837 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.837 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.837 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.095 nvme0n1 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:34.095 11:06:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.095 11:06:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.660 nvme0n1 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 
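
The trace above is the target-side half of one authentication round: nvmet_auth_set_key (host/auth.sh@42-51) programs 'hmac(sha256)', the FFDHE group name, and the DHHC-1 secret (plus the controller secret, when one is defined for that keyid) into the kernel nvmet soft target. The entries that follow are the host-side half, connect_authenticate (host/auth.sh@55-65): configure the SPDK bdev_nvme layer for the same digest/dhgroup, attach over RDMA, confirm the controller materialized, then tear it down. A condensed sketch of that host-side sequence, reconstructed from the rpc_cmd invocations visible in this trace (rpc_cmd is the autotest suite's RPC wrapper; treating it as a plain rpc.py call is an assumption):

# One connect_authenticate round for digest=sha256, dhgroup=ffdhe8192, keyid=0,
# using the exact RPCs and arguments traced above.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0   # ctrlr key passed only when defined
# DH-HMAC-CHAP succeeded iff the controller actually came up under the expected name:
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
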
00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.660 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.226 nvme0n1 00:24:35.226 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.226 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.226 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.226 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.226 11:06:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.226 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.158 nvme0n1 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:36.158 11:06:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.158 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.159 11:06:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.736 nvme0n1 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.736 
11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.736 11:06:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.392 nvme0n1 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.393 11:06:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.393 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.326 nvme0n1 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
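
At this point the outer digest loop has advanced from sha256 to sha384 and the dhgroup sweep restarts at ffdhe2048. The whole section is one three-level sweep, visible in the traced loop headers at host/auth.sh@100-102; a sketch of its shape (the full contents of the digests, dhgroups, and keys arrays beyond the values traced in this excerpt are assumptions):

for digest in "${digests[@]}"; do            # host/auth.sh@100: sha256, sha384, ...
  for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101: ffdhe2048 .. ffdhe8192
    for keyid in "${!keys[@]}"; do           # host/auth.sh@102: keyids 0-4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: host side
    done
  done
done

Keyids with no controller secret (ckey is empty for keyid 4 in this run, hence the bare --dhchap-key key4 attaches with no --dhchap-ctrlr-key) exercise unidirectional authentication; the two-digit field after DHHC-1 in each secret follows the NVMe DH-HMAC-CHAP secret representation, where it identifies the transformation applied to the secret.
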
00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.326 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.327 11:06:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.327 nvme0n1 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.327 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.586 nvme0n1 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.586 11:06:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.586 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.844 nvme0n1 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.844 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.102 11:06:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.102 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.103 nvme0n1 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.103 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.361 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.361 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.361 11:06:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.361 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.361 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.361 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:39.361 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.361 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.361 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:24:39.362 nvme0n1 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.362 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.620 
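
The cycle traced above is host/auth.sh's connect_authenticate(): bdev_nvme_set_options pins the digest/dhgroup pair under test (auth.sh@60), bdev_nvme_attach_controller connects with the numbered DH-HMAC-CHAP keys (auth.sh@61), bdev_nvme_get_controllers confirms that nvme0 actually came up (auth.sh@64), and bdev_nvme_detach_controller tears it down (auth.sh@65). A minimal sketch reconstructed from the xtrace, assuming rpc_cmd wraps SPDK's scripts/rpc.py and that the transport and port values (rdma, 4420) come in from the environment:

    # Reconstruction of connect_authenticate() from the trace; keys/ckeys are
    # the DHHC-1 secrets registered earlier in the test (array names taken
    # from the ${!keys[@]} / ckeys[keyid] expansions visible above).
    connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Only pass a controller key when a bidirectional secret exists (auth.sh@58)
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
      # Authentication passed only if the controller is really there (auth.sh@64)
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0   # auth.sh@65
    }

(The bare nvme0n1 tokens interleaved in the trace are command output from the run, not part of the script.)
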
11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.620 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.878 nvme0n1 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:39.878 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.879 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.136 nvme0n1 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.137 11:06:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.395 nvme0n1 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.395 11:06:29 
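
nvmet_auth_set_key (auth.sh@42-51) is the target-side half of each cycle: it pushes the same digest, DH group and DHHC-1 secrets to the kernel nvmet host entry so both ends agree before the connect is attempted. Bash xtrace does not print redirections, so only the echoed values show up in the log; the configfs destinations below are assumptions based on the standard nvmet host attributes:

    # Target-side key setup; the configfs paths are assumed (xtrace hides the
    # "> file" redirections, so only the echoes appear in the trace above).
    nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed
      echo "hmac(${digest})" > "$host/dhchap_hash"      # auth.sh@48
      echo "$dhgroup"        > "$host/dhchap_dhgroup"   # auth.sh@49
      echo "$key"            > "$host/dhchap_key"       # auth.sh@50
      # keyid 4 has an empty ckey, so the controller key is set only when present
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51
    }
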
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.395 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.653 nvme0n1 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.653 11:06:29 
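
get_main_ns_ip (nvmf/common.sh@769-783), traced in full before every attach, maps the transport to the name of the environment variable holding the target address and then resolves that name through indirect expansion; here rdma selects NVMF_FIRST_TARGET_IP, which expands to 192.168.100.8. That indirection is why the trace shows first "[[ -z NVMF_FIRST_TARGET_IP ]]" and then "[[ -z 192.168.100.8 ]]". A sketch, with the transport variable name assumed (the trace only shows its expanded value, rdma):

    # IP selection as traced at nvmf/common.sh@769-783
    get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                    # common.sh@775
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # common.sh@775
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # common.sh@776
      [[ -z ${!ip} ]] && return 1  # indirect: the value of NVMF_FIRST_TARGET_IP
      echo "${!ip}"                # common.sh@783 -> 192.168.100.8
    }
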
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:40.653 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.654 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.912 nvme0n1 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.912 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:41.170 11:06:29 
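
The @101/@102 markers ("for dhgroup ...", "for keyid ...") show the driver loop: for the sha384 digest this stretch walks every DH group (ffdhe2048 and ffdhe3072 above, ffdhe4096 here, ffdhe6144 below) against every key index 0-4, configuring the target and then authenticating from the host each time. Reconstructed shape, assuming an enclosing digest loop that is not visible in this excerpt:

    # auth.sh@101-104 as implied by the trace; keys/ckeys were generated earlier
    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
      for keyid in "${!keys[@]}"; do           # auth.sh@102: indices 0..4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # auth.sh@103
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@104
      done
    done
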
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.170 11:06:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.428 nvme0n1 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.428 11:06:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:41.428 11:06:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.428 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.686 nvme0n1 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:41.686 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.687 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.945 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.203 nvme0n1 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.203 11:06:30 
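
A note on the odd-looking "[[ nvme0 == \n\v\m\e\0 ]]" checks (auth.sh@64): inside [[ ]] an unquoted right-hand side of == is a glob pattern, so the script quotes it to force a literal match, and xtrace renders a quoted pattern by backslash-escaping every character. The underlying comparison is simply:

    # What "[[ nvme0 == \n\v\m\e\0 ]]" in the log actually is: the controller
    # name from the RPC compared literally (quoting the RHS disables globbing)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
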
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:42.203 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:42.204 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.204 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.204 11:06:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.461 nvme0n1 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.461 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.462 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.028 nvme0n1 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.028 11:06:31 
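
Every secret in this run is an NVMe DH-HMAC-CHAP key string of the form DHHC-1:<hh>:<base64>:, where per the NVMe base spec hh indicates how the secret is represented (00 = unhashed; 01/02/03 = transformed with SHA-256/384/512) and the base64 payload is the secret followed by a CRC-32 of it. A quick way to pull one apart, using a key copied from the log above:

    # Split a DHHC-1 key into its fields ("DHHC-1:<hash-id>:<base64(secret||crc32)>:")
    key='DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul:'
    IFS=: read -r version hash payload _ <<< "$key"
    echo "version=$version hash-id=$hash"
    # 48 base64 chars -> 36 bytes: a 32-byte secret plus the 4-byte CRC-32
    echo -n "$payload" | base64 -d | wc -c
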
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.028 11:06:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.286 nvme0n1 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.286 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
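[Editor's note] The trace above is one full connect_authenticate pass: set the host's DH-HMAC-CHAP digest and DH group, attach over RDMA with the key pair under test, confirm the controller appears, then detach. For hand replay it corresponds roughly to the rpc.py sequence below. This is a minimal sketch, not the verbatim test code: it assumes SPDK's scripts/rpc.py is on PATH, a target is listening at the default RPC socket, and the keyring entries key0/ckey0 were registered earlier in the run (not shown in this excerpt).

    #!/usr/bin/env bash
    # One connect_authenticate pass (sha384 / ffdhe6144 / keyid 0) as plain
    # rpc.py calls; all flags and values are taken from the trace above.
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0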
00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.544 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.802 nvme0n1 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.802 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.059 11:06:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.059 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.060 11:06:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.318 nvme0n1 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
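[Editor's note] On the target side, each nvmet_auth_set_key call feeds the digest, DH group, and DHHC-1 secrets to the kernel nvmet entry for the host; the bare echo lines in the trace (auth.sh@48-@51) are those writes, with their redirections elided by xtrace. A minimal sketch of the keyid=2 setup above, assuming the usual kernel nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host NQN used throughout this run:

    # Target-side key setup for keyid=2. The configfs attribute names are an
    # assumption from the kernel nvmet layout; the NQN and key material are
    # copied from the trace above.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
    echo ffdhe6144 > "$host_dir/dhchap_dhgroup"
    echo 'DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB:' > "$host_dir/dhchap_key"
    echo 'DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3:' > "$host_dir/dhchap_ctrl_key"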
00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.318 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.576 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.834 nvme0n1 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:44.834 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:44.835 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.835 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:44.835 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:44.835 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.835 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.835 11:06:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.400 nvme0n1 00:24:45.400 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.400 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.400 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.401 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.966 nvme0n1 00:24:45.966 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.224 11:06:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.790 nvme0n1 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.790 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.791 11:06:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.725 nvme0n1 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:47.725 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:47.725 
11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:47.726 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.726 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.726 11:06:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.290 nvme0n1 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:48.290 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.291 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.856 nvme0n1 00:24:48.856 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.856 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.856 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.856 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.856 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.856 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.113 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.113 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.113 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.114 11:06:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.114 nvme0n1 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.114 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.372 11:06:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.372 11:06:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.372 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.630 nvme0n1 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.630 11:06:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.630 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
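Each pass above repeats one pattern: nvmet_auth_set_key programs the target with the digest, DH group, and DHHC-1 key for the current keyid, then connect_authenticate points the host at the same parameters and attaches a controller so the DH-HMAC-CHAP exchange actually runs. A minimal host-side sketch of one iteration, built only from the RPCs visible in this trace (rpc_cmd is the suite's JSON-RPC wrapper; key0/ckey0 are key names the test presumably registered earlier -- the registration step is not shown in this excerpt):

    # One (digest, dhgroup, keyid) iteration, host side only -- a sketch, not the verbatim test.
    digest=sha512
    dhgroup=ffdhe2048
    keyid=0

    # Restrict the host to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over RDMA, authenticating with the key pair registered for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Verify the authenticated controller came up, then tear it down for the next keyid.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0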
00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.631 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.890 nvme0n1 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.890 11:06:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.890 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.149 nvme0n1 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.149 11:06:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.149 11:06:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.408 nvme0n1 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
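Before each attach, connect_authenticate calls get_main_ns_ip to resolve the target address: an associative array maps the transport to the name of the environment variable that holds the address, and for rdma that indirects through NVMF_FIRST_TARGET_IP to 192.168.100.8 on this rig. A condensed sketch of the selection logic traced at nvmf/common.sh@769-783 (TEST_TRANSPORT is my assumption for the variable the suite switches on):

    # Condensed from the traced helper; returns the address the attach call passes as -a.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Map transport -> variable *name*, then ${!ip} dereferences that name.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # 192.168.100.8 in this run
    }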
00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.408 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.667 nvme0n1 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.667 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.926 nvme0n1 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.926 11:06:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.926 11:06:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.926 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.185 nvme0n1 00:24:51.185 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.185 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.185 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.185 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.185 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.185 11:06:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 
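One wrinkle visible in the keyid=4 passes: no controller key is configured there (the trace shows ckey= empty), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 contributes no --dhchap-ctrlr-key argument and the attach authenticates the host only, i.e. unidirectional rather than bidirectional DH-CHAP. A toy reproduction of that expansion (the array values below are placeholders, not the test's keys):

    # ':+' expands to the flag only when ckeys[keyid] is set and non-empty,
    # so keyid 4 contributes nothing and the controller goes unauthenticated.
    ckeys=( [0]='DHHC-1:03:placeholder' [4]='' )

    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
    done
    # keyid=0 -> extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 -> extra args: <none>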
00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:51.185 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:51.186 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:51.186 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.186 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.186 11:06:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.444 nvme0n1 00:24:51.444 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.444 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.444 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.444 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.444 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.444 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.704 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.963 nvme0n1 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.963 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.223 nvme0n1 00:24:52.223 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.223 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.223 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.223 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.223 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.223 11:06:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.223 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.481 nvme0n1 00:24:52.481 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.481 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.481 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.481 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.481 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.740 11:06:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.740 11:06:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.740 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.999 nvme0n1 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.999 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.000 11:06:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.258 nvme0n1 00:24:53.258 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.258 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.258 
11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.258 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.258 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.258 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.517 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.518 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.776 nvme0n1 00:24:53.776 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.776 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:53.777 11:06:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.777 11:06:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.345 nvme0n1 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.345 11:06:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.345 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.346 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.913 nvme0n1 00:24:54.913 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.913 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.913 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.913 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.913 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
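The cycle repeating through this trace is the same for every digest/dhgroup/key combination: the target-side secret is installed (nvmet_auth_set_key), the host is pinned to a single digest and DH group, a controller is attached with the matching DH-HMAC-CHAP keys, its presence is verified, and it is detached again. A minimal sketch of one such round trip, assuming it is run from an SPDK checkout (rpc_cmd in the harness wraps scripts/rpc.py) and that the key names key1/ckey1 were registered with the keyring earlier in the test, outside this excerpt:

    #!/usr/bin/env bash
    rpc=scripts/rpc.py    # assumption: invoked from an SPDK source tree

    # Pin the host to one digest/dhgroup pair for this iteration.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Attach, authenticating with key1 and, bidirectionally, ckey1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The attach succeeded iff a controller named nvme0 now exists.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    $rpc bdev_nvme_detach_controller nvme0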
00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.914 11:06:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.173 nvme0n1 00:24:55.173 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.173 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.173 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.173 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.173 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.173 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.431 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.432 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.690 nvme0n1 00:24:55.690 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.690 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.690 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.690 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.690 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:24:55.690 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.951 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:55.952 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:55.953 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.953 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.953 11:06:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.216 nvme0n1 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.216 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 
00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZmNTIwYTFiNGZhYTM1YTYyZGMzMjgyZDRhMjExZmQni+Ul: 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: ]] 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njk5Y2JhMzJmODdjMzgyYzY1MmM4OTY1YTkyN2FkMjRkMzk5ZDBlZjA5M2NkOTg1Mzg2Y2U3NzBmZTJmNmEyZUy0ljw=: 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.475 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.042 nvme0n1 
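Worth noting in the trace is the expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): for keyid=4 the controller key is empty, so the array expands to nothing and the attach runs with --dhchap-key key4 alone (unidirectional authentication), whereas the other key ids also pass the controller key. A standalone illustration of that bash idiom, with hypothetical placeholder values:

    # ${var:+word} yields word only when var is set and non-empty; wrapping
    # the expansion in an array makes an optional flag vanish cleanly.
    ckeys=( [0]=secret0 [4]='' )

    for keyid in 0 4; do
        ckey=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
        echo "keyid=$keyid -> --dhchap-key key${keyid} ${ckey[*]}"
    done
    # keyid=0 -> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # keyid=4 -> --dhchap-key key4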
00:24:57.042 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.042 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.042 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.042 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.043 11:06:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.043 11:06:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.611 nvme0n1 00:24:57.611 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.611 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.611 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.611 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.611 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.611 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
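The get_main_ns_ip helper interleaved through these entries resolves which address to dial purely from the transport type: it maps rdma and tcp to the names of environment variables (NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP) and then expands the chosen name indirectly, which is why each lookup ends by echoing 192.168.100.8. A condensed sketch of the same logic, assuming the harness exports those variables:

    # Resolve the target IP for a transport via variable-name indirection.
    declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )

    get_main_ns_ip() {
        local transport=$1 ip
        ip=${ip_candidates[$transport]}   # e.g. NVMF_FIRST_TARGET_IP
        [[ -n $ip && -n ${!ip} ]] || return 1
        echo "${!ip}"                     # indirect expansion of that name
    }

    export NVMF_FIRST_TARGET_IP=192.168.100.8
    get_main_ns_ip rdma                   # prints 192.168.100.8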
00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.870 11:06:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.437 nvme0n1 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE3MjNjODgzMWQzNGRkYmU5NzFhZWI0N2Y1NWFiNDZmYTI3MTQ0YTRmZTM0ZTUzEXSo5A==: 00:24:58.437 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: ]] 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzdjNDk0Y2Q4MWFlZDVjODliMTIxMTEyMzdmOThiZTHyIBP6: 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:58.438 11:06:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.438 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.005 nvme0n1 00:24:59.005 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.005 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.005 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.005 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.005 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.005 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
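The traces above record one pass of the test's per-key loop: nvmet_auth_set_key programs the key into the kernel nvmet target, bdev_nvme_set_options pins the host to a single digest/dhgroup pair, bdev_nvme_attach_controller authenticates with the matching --dhchap-key/--dhchap-ctrlr-key pair, bdev_nvme_get_controllers confirms the controller came up, and bdev_nvme_detach_controller tears it down for the next iteration. A condensed, illustrative sketch of that cycle follows; rpc.py stands in for the log's rpc_cmd wrapper, and it assumes the named keys (key3/ckey3) were registered earlier in the run, which happens before this section of the log.

# One attach/verify/detach round of the DH-HMAC-CHAP loop (illustrative sketch,
# not the literal test script).
rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# Authentication succeeded only if the controller is now visible by name.
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0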
00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.264 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDdmNjYyOTliNWRmOTI0NGFmN2IxYmIxMWJkYjVlYTY0NGFiNDc5NDlmNDhkMmFlNDIwOTUzZGU2YzVhMDc5MP1pb7E=: 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.265 11:06:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.833 nvme0n1 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.833 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:24:59.834 
11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.834 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.093 request: 00:25:00.093 { 00:25:00.093 "name": "nvme0", 00:25:00.093 "trtype": "rdma", 00:25:00.093 "traddr": "192.168.100.8", 00:25:00.093 "adrfam": "ipv4", 00:25:00.093 "trsvcid": "4420", 00:25:00.093 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:25:00.093 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:00.093 "prchk_reftag": false, 00:25:00.093 "prchk_guard": false, 00:25:00.093 "hdgst": false, 00:25:00.093 "ddgst": false, 00:25:00.093 "allow_unrecognized_csi": false, 00:25:00.093 "method": "bdev_nvme_attach_controller", 00:25:00.093 "req_id": 1 00:25:00.093 } 00:25:00.093 Got JSON-RPC error response 00:25:00.093 response: 00:25:00.093 { 00:25:00.093 "code": -5, 00:25:00.093 "message": "Input/output error" 00:25:00.093 } 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.093 request: 00:25:00.093 { 00:25:00.093 "name": "nvme0", 00:25:00.093 "trtype": "rdma", 00:25:00.093 "traddr": "192.168.100.8", 00:25:00.093 "adrfam": "ipv4", 00:25:00.093 "trsvcid": "4420", 00:25:00.093 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:00.093 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:00.093 "prchk_reftag": false, 00:25:00.093 "prchk_guard": false, 00:25:00.093 "hdgst": false, 00:25:00.093 "ddgst": false, 00:25:00.093 "dhchap_key": "key2", 00:25:00.093 "allow_unrecognized_csi": false, 00:25:00.093 "method": "bdev_nvme_attach_controller", 00:25:00.093 "req_id": 1 00:25:00.093 } 00:25:00.093 Got JSON-RPC error response 00:25:00.093 response: 00:25:00.093 { 00:25:00.093 "code": -5, 00:25:00.093 "message": "Input/output error" 00:25:00.093 } 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:25:00.093 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.094 11:06:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.353 request: 00:25:00.353 { 00:25:00.353 "name": "nvme0", 00:25:00.353 "trtype": "rdma", 00:25:00.353 "traddr": "192.168.100.8", 00:25:00.353 "adrfam": "ipv4", 00:25:00.353 "trsvcid": "4420", 00:25:00.353 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:00.353 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:00.353 "prchk_reftag": false, 00:25:00.353 "prchk_guard": false, 00:25:00.353 "hdgst": false, 00:25:00.353 "ddgst": false, 00:25:00.353 "dhchap_key": "key1", 00:25:00.353 "dhchap_ctrlr_key": "ckey2", 00:25:00.353 "allow_unrecognized_csi": false, 00:25:00.353 "method": "bdev_nvme_attach_controller", 00:25:00.353 "req_id": 1 00:25:00.353 } 00:25:00.353 Got JSON-RPC error response 00:25:00.353 response: 00:25:00.353 { 00:25:00.353 "code": -5, 00:25:00.353 "message": "Input/output error" 00:25:00.353 } 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:00.353 11:06:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.353 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.612 nvme0n1 00:25:00.612 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.613 
11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.613 request: 00:25:00.613 { 00:25:00.613 "name": "nvme0", 00:25:00.613 "dhchap_key": "key1", 00:25:00.613 "dhchap_ctrlr_key": "ckey2", 00:25:00.613 "method": "bdev_nvme_set_keys", 00:25:00.613 "req_id": 1 00:25:00.613 } 00:25:00.613 Got JSON-RPC error response 00:25:00.613 response: 00:25:00.613 { 00:25:00.613 "code": -13, 00:25:00.613 "message": "Permission denied" 00:25:00.613 } 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:00.613 11:06:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:01.989 11:06:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.989 11:06:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:01.989 11:06:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.989 11:06:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.989 11:06:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.989 11:06:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:01.989 11:06:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE2ZmU3MTkxZGE5YWI0MzcyYzAxZGJhMjcwNzBkMTBiZjQyNDMyZjhmOWYzYjkwyvpDfw==: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODRmYTIwYWRjN2VhNjEzODE3NzkyZTc5ZTdmNzY4N2U3NWVhZGQ3M2Y2MTk5NjU4eS6kLQ==: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.926 nvme0n1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWFjNDhhYWI2MmMwOGE4ZDg4MWEzYzAxMDM3M2RmMDRjr1PB: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzljY2U5M2IwMTcyMmIwNzJiMTAwMDVkMDgyNDk2MDI8Y8J3: 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.926 request: 00:25:02.926 { 00:25:02.926 "name": "nvme0", 00:25:02.926 "dhchap_key": "key2", 00:25:02.926 "dhchap_ctrlr_key": "ckey1", 00:25:02.926 "method": "bdev_nvme_set_keys", 00:25:02.926 "req_id": 1 00:25:02.926 } 00:25:02.926 Got JSON-RPC error response 00:25:02.926 response: 00:25:02.926 { 00:25:02.926 "code": -13, 00:25:02.926 "message": "Permission denied" 00:25:02.926 } 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.926 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.184 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.184 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:03.184 11:06:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:04.119 11:06:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.119 11:06:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:04.119 11:06:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.119 11:06:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.119 11:06:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.119 11:06:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:04.119 11:06:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:05.054 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.054 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:05.054 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.054 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.054 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:05.313 
11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:05.313 rmmod nvme_rdma 00:25:05.313 rmmod nvme_fabrics 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1545107 ']' 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1545107 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1545107 ']' 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1545107 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:05.313 11:06:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1545107 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1545107' 00:25:05.313 killing process with pid 1545107 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1545107 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1545107 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:05.313 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:05.313 
11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:05.572 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:05.572 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:05.572 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:05.572 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:05.572 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:05.572 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:25:05.572 11:06:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:08.103 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:08.103 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:08.103 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:08.103 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:08.103 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:08.103 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:08.104 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:09.040 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:09.040 11:06:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DLu /tmp/spdk.key-null.xJQ /tmp/spdk.key-sha256.wLJ /tmp/spdk.key-sha384.YPn /tmp/spdk.key-sha512.DBU /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:25:09.040 11:06:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:11.571 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:11.571 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:80:04.2 (8086 2021): Already 
using the vfio-pci driver 00:25:11.571 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:11.571 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:11.571 00:25:11.571 real 0m56.189s 00:25:11.571 user 0m52.996s 00:25:11.571 sys 0m12.140s 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.571 ************************************ 00:25:11.571 END TEST nvmf_auth_host 00:25:11.571 ************************************ 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.571 ************************************ 00:25:11.571 START TEST nvmf_bdevperf 00:25:11.571 ************************************ 00:25:11.571 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:25:11.830 * Looking for test storage... 
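Before the log moves on to nvmf_bdevperf, note the shape of the failure cases traced earlier in this test: the attach with no key, with key2 alone, and with key1/ckey2, plus the bdev_nvme_set_keys calls, all run under NOT, autotest_common.sh's expected-failure wrapper, so each step passes only when the RPC returns an error (the JSON-RPC -5 "Input/output error" and -13 "Permission denied" responses seen above). A condensed sketch of one such negative check, again with rpc.py standing in for rpc_cmd:

# A key/ctrlr-key mismatch must be rejected by DH-HMAC-CHAP; NOT inverts the
# exit status, so this line succeeds only when the attach itself fails.
NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
    -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2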
00:25:11.830 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.830 --rc genhtml_branch_coverage=1 00:25:11.830 --rc genhtml_function_coverage=1 00:25:11.830 --rc genhtml_legend=1 00:25:11.830 --rc geninfo_all_blocks=1 00:25:11.830 --rc geninfo_unexecuted_blocks=1 00:25:11.830 00:25:11.830 ' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.830 --rc genhtml_branch_coverage=1 00:25:11.830 --rc genhtml_function_coverage=1 00:25:11.830 --rc genhtml_legend=1 00:25:11.830 --rc geninfo_all_blocks=1 00:25:11.830 --rc geninfo_unexecuted_blocks=1 00:25:11.830 00:25:11.830 ' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.830 --rc genhtml_branch_coverage=1 00:25:11.830 --rc genhtml_function_coverage=1 00:25:11.830 --rc genhtml_legend=1 00:25:11.830 --rc geninfo_all_blocks=1 00:25:11.830 --rc geninfo_unexecuted_blocks=1 00:25:11.830 00:25:11.830 ' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:11.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.830 --rc genhtml_branch_coverage=1 00:25:11.830 --rc genhtml_function_coverage=1 00:25:11.830 --rc genhtml_legend=1 00:25:11.830 --rc geninfo_all_blocks=1 00:25:11.830 --rc geninfo_unexecuted_blocks=1 00:25:11.830 00:25:11.830 ' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.830 11:07:00 
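The cmp_versions trace above evaluates 'lt 1.15 2' by splitting both version strings on '.', '-' and ':' (IFS=.-:) and comparing components numerically, left to right. The same idea as a standalone sketch (function name is illustrative; numeric components only):

    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower component decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal versions are not "lt"
    }
    version_lt 1.15 2 && echo "lcov older than 2"         # true, as in the trace above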
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.830 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.830 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:11.831 11:07:00 
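The "line 33: [: : integer expression expected" complaint above is a genuine bash diagnostic: build_nvmf_app_args runs '[' '' -eq 1 ']', and test(1) refuses -eq when one side is an empty string. A defensive form of the same guard (variable name is illustrative, not the one common.sh actually tests):

    # '[ "$SPDK_TEST_FOO" -eq 1 ]' errors out when the variable is unset or empty;
    # defaulting the expansion keeps the comparison well-formed either way
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi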
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.831 11:07:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.477 11:07:06 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:25:18.477 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:25:18.477 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:25:18.477 Found net devices under 0000:af:00.0: mlx_0_0 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:25:18.477 Found net devices under 0000:af:00.1: mlx_0_1 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # 
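Device discovery above resolves each matched PCI function to its kernel net device by globbing sysfs ("/sys/bus/pci/devices/$pci/net/"*), which yields mlx_0_0 and mlx_0_1 here. The same lookup as a standalone sketch:

    # ConnectX-5 functions (vendor 0x15b3, device 0x1017) can also be listed with:
    #   lspci -Dnn -d 15b3:1017
    pci=0000:af:00.0                                  # BDF from the 'Found ...' line above
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue                  # glob may match nothing
        echo "net device under $pci: ${netdir##*/}"   # prints: mlx_0_0
    done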
rxe_cfg rxe-net 00:25:18.477 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:18.478 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:18.478 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:25:18.478 altname enp175s0f0np0 00:25:18.478 altname ens801f0np0 00:25:18.478 inet 192.168.100.8/24 scope global mlx_0_0 00:25:18.478 valid_lft forever preferred_lft forever 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
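get_ip_address above resolves an RDMA interface to its first IPv4 address by taking field 4 of ip(8)'s one-line output and stripping the prefix length. Standalone:

    get_ip_address() {
        local interface=$1
        # 'ip -o -4 addr show mlx_0_0' prints e.g. '8: mlx_0_0 inet 192.168.100.8/24 ...'
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this test host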
nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:18.478 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:18.478 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:25:18.478 altname enp175s0f1np1 00:25:18.478 altname ens801f1np1 00:25:18.478 inet 192.168.100.9/24 scope global mlx_0_1 00:25:18.478 valid_lft forever preferred_lft forever 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:18.478 11:07:06 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:18.478 192.168.100.9' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:18.478 192.168.100.9' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:18.478 192.168.100.9' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1559588 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1559588 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1559588 ']' 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:18.478 11:07:06 
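RDMA_IP_LIST is a newline-separated list, and the first and second target IPs are peeled off with head/tail exactly as traced above:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9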
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:18.478 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.478 [2024-11-15 11:07:06.335467] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:25:18.478 [2024-11-15 11:07:06.335523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.478 [2024-11-15 11:07:06.399919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:18.478 [2024-11-15 11:07:06.446090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.478 [2024-11-15 11:07:06.446122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.478 [2024-11-15 11:07:06.446130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.478 [2024-11-15 11:07:06.446136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.478 [2024-11-15 11:07:06.446141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.479 [2024-11-15 11:07:06.450181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.479 [2024-11-15 11:07:06.450248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.479 [2024-11-15 11:07:06.450250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 [2024-11-15 11:07:06.615588] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9f29e0/0x9f6ed0) succeed. 00:25:18.479 [2024-11-15 11:07:06.624774] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9f3fd0/0xa38570) succeed. 
00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 Malloc0 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 [2024-11-15 11:07:06.767031] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:18.479 { 00:25:18.479 "params": { 00:25:18.479 "name": "Nvme$subsystem", 00:25:18.479 "trtype": "$TEST_TRANSPORT", 00:25:18.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.479 "adrfam": "ipv4", 00:25:18.479 "trsvcid": "$NVMF_PORT", 00:25:18.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.479 "hdgst": ${hdgst:-false}, 00:25:18.479 "ddgst": ${ddgst:-false} 00:25:18.479 }, 00:25:18.479 "method": "bdev_nvme_attach_controller" 00:25:18.479 } 00:25:18.479 EOF 00:25:18.479 )") 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:18.479 11:07:06 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:18.479 11:07:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:18.479 "params": { 00:25:18.479 "name": "Nvme1", 00:25:18.479 "trtype": "rdma", 00:25:18.479 "traddr": "192.168.100.8", 00:25:18.479 "adrfam": "ipv4", 00:25:18.479 "trsvcid": "4420", 00:25:18.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:18.479 "hdgst": false, 00:25:18.479 "ddgst": false 00:25:18.479 }, 00:25:18.479 "method": "bdev_nvme_attach_controller" 00:25:18.479 }' 00:25:18.479 [2024-11-15 11:07:06.818125] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:25:18.479 [2024-11-15 11:07:06.818179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559623 ] 00:25:18.479 [2024-11-15 11:07:06.881228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.479 [2024-11-15 11:07:06.922700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.479 Running I/O for 1 seconds... 00:25:19.414 17282.00 IOPS, 67.51 MiB/s 00:25:19.414 Latency(us) 00:25:19.414 [2024-11-15T10:07:08.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.414 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:19.414 Verification LBA range: start 0x0 length 0x4000 00:25:19.414 Nvme1n1 : 1.01 17330.39 67.70 0.00 0.00 7344.89 2749.66 10713.71 00:25:19.414 [2024-11-15T10:07:08.298Z] =================================================================================================================== 00:25:19.414 [2024-11-15T10:07:08.298Z] Total : 17330.39 67.70 0.00 0.00 7344.89 2749.66 10713.71 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1559854 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:19.673 { 00:25:19.673 "params": { 00:25:19.673 "name": "Nvme$subsystem", 00:25:19.673 "trtype": "$TEST_TRANSPORT", 00:25:19.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.673 "adrfam": "ipv4", 00:25:19.673 "trsvcid": "$NVMF_PORT", 00:25:19.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.673 "hdgst": ${hdgst:-false}, 00:25:19.673 "ddgst": ${ddgst:-false} 00:25:19.673 }, 00:25:19.673 "method": "bdev_nvme_attach_controller" 00:25:19.673 } 00:25:19.673 EOF 00:25:19.673 )") 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf 
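gen_nvmf_target_json assembles the bdev_nvme_attach_controller entry printed above, and bdevperf reads it through an anonymous fd (--json /dev/fd/62 via process substitution). A reconstruction of the mechanism; the subsystems/config envelope is assumed from SPDK's standard JSON-config layout rather than shown in the trace:

    # attach-controller entry copied from the printf above; the envelope is an assumption
    cfg='{"subsystems": [{"subsystem": "bdev", "config": [
      {"params": {"name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false},
       "method": "bdev_nvme_attach_controller"}]}]}'
    ./build/examples/bdevperf --json <(printf '%s\n' "$cfg") -q 128 -o 4096 -w verify -t 1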
-- nvmf/common.sh@582 -- # cat 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:19.673 11:07:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:19.673 "params": { 00:25:19.673 "name": "Nvme1", 00:25:19.673 "trtype": "rdma", 00:25:19.673 "traddr": "192.168.100.8", 00:25:19.673 "adrfam": "ipv4", 00:25:19.673 "trsvcid": "4420", 00:25:19.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.673 "hdgst": false, 00:25:19.673 "ddgst": false 00:25:19.673 }, 00:25:19.673 "method": "bdev_nvme_attach_controller" 00:25:19.673 }' 00:25:19.673 [2024-11-15 11:07:08.345188] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:25:19.673 [2024-11-15 11:07:08.345238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559854 ] 00:25:19.673 [2024-11-15 11:07:08.412802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.673 [2024-11-15 11:07:08.452246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.931 Running I/O for 15 seconds... 00:25:21.798 17442.00 IOPS, 68.13 MiB/s [2024-11-15T10:07:11.613Z] 17528.00 IOPS, 68.47 MiB/s [2024-11-15T10:07:11.613Z] 11:07:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1559588 00:25:22.729 11:07:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:23.558 15604.00 IOPS, 60.95 MiB/s [2024-11-15T10:07:12.442Z] [2024-11-15 11:07:12.335217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x189c00 00:25:23.558 [2024-11-15 11:07:12.335272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.558 [2024-11-15 11:07:12.335289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x189c00 00:25:23.558 [2024-11-15 11:07:12.335296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.558 [2024-11-15 11:07:12.335306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x189c00 00:25:23.558 [2024-11-15 11:07:12.335313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.558 [2024-11-15 11:07:12.335321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x189c00 00:25:23.558 [2024-11-15 11:07:12.335328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.558 [2024-11-15 11:07:12.335337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x189c00 00:25:23.559 
[2024-11-15 11:07:12.335343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.559 [repeated records elided: the same print_command/print_completion pair recurs for every remaining in-flight command, READs (LBAs 112392 through 112632, 8 blocks each, SGL KEYED DATA BLOCK key 0x189c00) and WRITEs (LBAs 112640 through 112840, SGL DATA BLOCK OFFSET 0x0), each completing with ABORTED - SQ DELETION (00/08) qid:1. The capture cuts off mid-record at LBA 112840.]
BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 
nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.560 [2024-11-15 11:07:12.336497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.560 [2024-11-15 11:07:12.336503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 
11:07:12.336806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 
dnr:0 00:25:23.561 [2024-11-15 11:07:12.336946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.336988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.336995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.337003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.337009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.337016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.337023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.337031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.561 [2024-11-15 11:07:12.337037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.561 [2024-11-15 11:07:12.337045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.337159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.562 [2024-11-15 11:07:12.337170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19c19000 sqhd:7210 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.338707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:23.562 [2024-11-15 11:07:12.338739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:23.562 [2024-11-15 11:07:12.338760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113368 len:8 PRP1 0x0 PRP2 0x0 00:25:23.562 [2024-11-15 11:07:12.338783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.562 [2024-11-15 11:07:12.342219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:23.562 [2024-11-15 11:07:12.356093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:25:23.562 [2024-11-15 11:07:12.359563] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:23.562 [2024-11-15 11:07:12.359583] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:23.562 [2024-11-15 11:07:12.359590] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: 
Failed to connect rqpair=0x2000170ed040 00:25:24.754 11703.00 IOPS, 45.71 MiB/s [2024-11-15T10:07:13.638Z] [2024-11-15 11:07:13.363461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:25:24.754 [2024-11-15 11:07:13.363513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:24.754 [2024-11-15 11:07:13.363796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:24.754 [2024-11-15 11:07:13.363805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:24.754 [2024-11-15 11:07:13.363818] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:25:24.754 [2024-11-15 11:07:13.363828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:24.754 [2024-11-15 11:07:13.370317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:24.754 [2024-11-15 11:07:13.373354] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:24.754 [2024-11-15 11:07:13.373386] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:24.754 [2024-11-15 11:07:13.373392] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:25:25.578 9362.40 IOPS, 36.57 MiB/s [2024-11-15T10:07:14.462Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1559588 Killed "${NVMF_APP[@]}" "$@" 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1560935 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1560935 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1560935 ']' 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:25.578 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:25.578 [2024-11-15 11:07:14.365486] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:25:25.578 [2024-11-15 11:07:14.365531] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.578 [2024-11-15 11:07:14.377442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:25:25.578 [2024-11-15 11:07:14.377465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:25.578 [2024-11-15 11:07:14.377648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:25.578 [2024-11-15 11:07:14.377659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:25.579 [2024-11-15 11:07:14.377668] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:25:25.579 [2024-11-15 11:07:14.377678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:25.579 [2024-11-15 11:07:14.384349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:25.579 [2024-11-15 11:07:14.387083] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:25.579 [2024-11-15 11:07:14.387111] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:25.579 [2024-11-15 11:07:14.387118] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:25:25.579 [2024-11-15 11:07:14.429118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:25.838 [2024-11-15 11:07:14.472046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.838 [2024-11-15 11:07:14.472076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.838 [2024-11-15 11:07:14.472083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.838 [2024-11-15 11:07:14.472090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.838 [2024-11-15 11:07:14.472095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:25.838 [2024-11-15 11:07:14.473409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.838 [2024-11-15 11:07:14.473499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:25.838 [2024-11-15 11:07:14.473501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.838 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:25.838 [2024-11-15 11:07:14.633308] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22679e0/0x226bed0) succeed. 00:25:25.838 [2024-11-15 11:07:14.642723] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2268fd0/0x22ad570) succeed. 00:25:26.096 7802.00 IOPS, 30.48 MiB/s [2024-11-15T10:07:14.980Z] 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:26.096 Malloc0 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:26.096 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.096 11:07:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:26.096 [2024-11-15 11:07:14.786956] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:26.097 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.097 11:07:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1559854 00:25:26.662 [2024-11-15 11:07:15.391084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:25:26.662 [2024-11-15 11:07:15.391112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:26.662 [2024-11-15 11:07:15.391298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:26.662 [2024-11-15 11:07:15.391309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:26.662 [2024-11-15 11:07:15.391317] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:25:26.662 [2024-11-15 11:07:15.391331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:26.662 [2024-11-15 11:07:15.394357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:26.662 [2024-11-15 11:07:15.437596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:25:27.854 7217.00 IOPS, 28.19 MiB/s [2024-11-15T10:07:17.672Z] 8514.50 IOPS, 33.26 MiB/s [2024-11-15T10:07:19.047Z] 9520.56 IOPS, 37.19 MiB/s [2024-11-15T10:07:19.982Z] 10329.10 IOPS, 40.35 MiB/s [2024-11-15T10:07:20.916Z] 10988.45 IOPS, 42.92 MiB/s [2024-11-15T10:07:21.849Z] 11533.00 IOPS, 45.05 MiB/s [2024-11-15T10:07:22.783Z] 11998.00 IOPS, 46.87 MiB/s [2024-11-15T10:07:23.717Z] 12397.50 IOPS, 48.43 MiB/s [2024-11-15T10:07:23.717Z] 12743.27 IOPS, 49.78 MiB/s 00:25:34.833 Latency(us) 00:25:34.833 [2024-11-15T10:07:23.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.833 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:34.833 Verification LBA range: start 0x0 length 0x4000 00:25:34.833 Nvme1n1 : 15.01 12742.94 49.78 9986.35 0.00 5610.85 373.98 1035810.73 00:25:34.833 [2024-11-15T10:07:23.717Z] =================================================================================================================== 00:25:34.833 [2024-11-15T10:07:23.717Z] Total : 12742.94 49.78 9986.35 0.00 5610.85 373.98 1035810.73 00:25:35.091 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:35.091 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 
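For reference, the target that bdevperf exercised here was assembled entirely from the rpc_cmd calls echoed above; issued by hand against a live nvmf_tgt they would look roughly like this (rpc_cmd is effectively a wrapper around scripts/rpc.py, which defaults to /var/tmp/spdk.sock):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

As a sanity check on the summary table, the 49.78 MiB/s column is consistent with the IOPS column: 12742.94 IOPS x 4096-byte I/Os / 2^20 is approximately 49.78 MiB/s.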
00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:35.092 rmmod nvme_rdma 00:25:35.092 rmmod nvme_fabrics 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1560935 ']' 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1560935 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1560935 ']' 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1560935 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:35.092 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1560935 00:25:35.350 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:35.350 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:35.350 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1560935' 00:25:35.350 killing process with pid 1560935 00:25:35.350 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1560935 00:25:35.350 11:07:23 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1560935 00:25:35.350 11:07:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.350 11:07:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:35.350 00:25:35.350 real 0m23.773s 00:25:35.350 user 1m2.274s 00:25:35.350 sys 0m5.155s 00:25:35.350 11:07:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:35.350 11:07:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.350 ************************************ 00:25:35.350 END TEST nvmf_bdevperf 00:25:35.350 ************************************ 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:35.608 ************************************ 00:25:35.608 START TEST nvmf_target_disconnect 00:25:35.608 ************************************ 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:35.608 * Looking for test storage... 00:25:35.608 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:25:35.608 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.609 --rc genhtml_branch_coverage=1 00:25:35.609 --rc genhtml_function_coverage=1 00:25:35.609 --rc genhtml_legend=1 00:25:35.609 --rc geninfo_all_blocks=1 00:25:35.609 --rc geninfo_unexecuted_blocks=1 00:25:35.609 00:25:35.609 ' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.609 --rc genhtml_branch_coverage=1 00:25:35.609 --rc genhtml_function_coverage=1 00:25:35.609 --rc genhtml_legend=1 00:25:35.609 --rc geninfo_all_blocks=1 00:25:35.609 --rc geninfo_unexecuted_blocks=1 00:25:35.609 00:25:35.609 ' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.609 --rc genhtml_branch_coverage=1 00:25:35.609 --rc genhtml_function_coverage=1 00:25:35.609 --rc genhtml_legend=1 00:25:35.609 --rc geninfo_all_blocks=1 00:25:35.609 --rc geninfo_unexecuted_blocks=1 00:25:35.609 00:25:35.609 ' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.609 --rc genhtml_branch_coverage=1 00:25:35.609 --rc genhtml_function_coverage=1 00:25:35.609 --rc genhtml_legend=1 00:25:35.609 --rc geninfo_all_blocks=1 00:25:35.609 --rc geninfo_unexecuted_blocks=1 00:25:35.609 00:25:35.609 ' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:35.609 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.868 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.868 11:07:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:25:41.134 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:25:41.134 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:41.134 11:07:29 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:25:41.134 Found net devices under 0000:af:00.0: mlx_0_0 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:25:41.134 Found net devices under 0000:af:00.1: mlx_0_1 00:25:41.134 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:41.135 11:07:29 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.135 11:07:29 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:41.135 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:41.135 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:25:41.135 altname enp175s0f0np0 00:25:41.135 altname ens801f0np0 00:25:41.135 inet 192.168.100.8/24 scope global mlx_0_0 00:25:41.135 valid_lft forever preferred_lft forever 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:41.135 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:41.135 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:25:41.135 altname enp175s0f1np1 00:25:41.135 altname ens801f1np1 00:25:41.135 inet 192.168.100.9/24 scope global mlx_0_1 00:25:41.135 valid_lft forever preferred_lft forever 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.135 11:07:29 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:41.135 192.168.100.9' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:41.135 192.168.100.9' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:41.135 192.168.100.9' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:25:41.135 11:07:29 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:41.135 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:41.136 ************************************ 00:25:41.136 START TEST nvmf_target_disconnect_tc1 00:25:41.136 ************************************ 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:25:41.136 11:07:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:41.394 [2024-11-15 11:07:30.043748] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:41.394 [2024-11-15 11:07:30.043786] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:41.394 [2024-11-15 11:07:30.043794] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:25:42.330 [2024-11-15 11:07:31.047760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:25:42.330 [2024-11-15 11:07:31.047827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 00:25:42.330 [2024-11-15 11:07:31.047854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:25:42.330 [2024-11-15 11:07:31.047911] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:42.330 [2024-11-15 11:07:31.047934] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:42.330 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:25:42.330 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:42.330 Initializing NVMe Controllers 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.330 00:25:42.330 real 0m1.133s 00:25:42.330 user 0m0.930s 00:25:42.330 sys 0m0.190s 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:42.330 ************************************ 00:25:42.330 END TEST nvmf_target_disconnect_tc1 00:25:42.330 ************************************ 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:42.330 11:07:31 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:42.330 ************************************ 00:25:42.330 START TEST nvmf_target_disconnect_tc2 00:25:42.330 ************************************ 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1565795 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1565795 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1565795 ']' 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.330 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.330 [2024-11-15 11:07:31.189861] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:25:42.330 [2024-11-15 11:07:31.189903] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.589 [2024-11-15 11:07:31.265939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.589 [2024-11-15 11:07:31.308332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.589 [2024-11-15 11:07:31.308371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:42.589 [2024-11-15 11:07:31.308379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.589 [2024-11-15 11:07:31.308385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.589 [2024-11-15 11:07:31.308390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.589 [2024-11-15 11:07:31.309955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:42.589 [2024-11-15 11:07:31.310062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:42.589 [2024-11-15 11:07:31.310210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:42.589 [2024-11-15 11:07:31.310210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.589 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.847 Malloc0 00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.847 [2024-11-15 11:07:31.509559] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf21060/0xf2cfc0) succeed. 00:25:42.847 [2024-11-15 11:07:31.519170] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf226f0/0xf6e660) succeed. 
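The rpc_cmd invocations threaded through this trace are the test harness's thin wrapper around scripts/rpc.py, talking JSON-RPC to the freshly started nvmf_tgt on /var/tmp/spdk.sock. A minimal sketch of the same target bring-up outside the harness, using only arguments that appear in this run (default socket path assumed):

  # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # RDMA transport with the shared-buffer count the test requests
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  # subsystem open to any host (-a), serial number taken from the trace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The last three calls are exactly what the trace runs next; the listener address is the first RDMA IP harvested during nvmftestinit (192.168.100.8, port 4420).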
00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.847 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.848 [2024-11-15 11:07:31.660297] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1566029 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:42.848 11:07:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:45.377 11:07:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
1565795 00:25:45.377 11:07:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Read completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.311 Write completed with error (sct=0, sc=8) 00:25:46.311 starting I/O failed 00:25:46.312 Read completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 Write completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 Write completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 Read completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 Write completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 Read completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 Read completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 Read completed with error (sct=0, sc=8) 00:25:46.312 starting I/O failed 00:25:46.312 [2024-11-15 11:07:34.843889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:46.879 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1565795 Killed "${NVMF_APP[@]}" "$@" 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1566580 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1566580 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1566580 ']' 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.879 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.879 [2024-11-15 11:07:35.735229] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:25:46.879 [2024-11-15 11:07:35.735280] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.137 [2024-11-15 11:07:35.811370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Read completed with error (sct=0, sc=8) 00:25:47.137 starting I/O failed 00:25:47.137 Write completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Read completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Write completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Read completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Write completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Read completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Read completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Read completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Write completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 Read completed with error (sct=0, sc=8) 00:25:47.138 starting I/O failed 00:25:47.138 [2024-11-15 11:07:35.848356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:47.138 [2024-11-15 11:07:35.853501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:47.138 [2024-11-15 11:07:35.853530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.138 [2024-11-15 11:07:35.853537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.138 [2024-11-15 11:07:35.853544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.138 [2024-11-15 11:07:35.853549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.138 [2024-11-15 11:07:35.855285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:47.138 [2024-11-15 11:07:35.855391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:47.138 [2024-11-15 11:07:35.855497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:47.138 [2024-11-15 11:07:35.855498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.138 11:07:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:47.396 Malloc0 00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:47.396 [2024-11-15 11:07:36.057809] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x64f060/0x65afc0) succeed. 00:25:47.396 [2024-11-15 11:07:36.067345] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6506f0/0x69c660) succeed. 
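As with the first target, nvmfappstart has exec'd a new nvmf_tgt, stored its PID in nvmfpid, and waitforlisten has blocked until the process answered on the RPC socket; only then do the bdev/transport calls above and the subsystem and listener calls below run. A minimal sketch of that gating idea (wait_for_rpc is a hypothetical name; the harness's real helper lives in test/common/autotest_common.sh):

  # Poll the RPC socket until the target answers, or fail if it dies first.
  wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      while kill -0 "$pid" 2>/dev/null; do
          # rpc_get_methods is a cheap request any live SPDK app can serve
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1  # process exited before its RPC server came up
  }

The practical consequence for the test is that the host sees no listener at 192.168.100.8:4420 only for the kill-to-reconfigure gap, not for the full restart of the application.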
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:47.396 [2024-11-15 11:07:36.207968] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.396 11:07:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1566029
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Write completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 Read completed with error (sct=0, sc=8)
00:25:48.330 starting I/O failed
00:25:48.330 [2024-11-15 11:07:36.852903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.330 [2024-11-15 11:07:36.858019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.330 [2024-11-15 11:07:36.858074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.330 [2024-11-15 11:07:36.858095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.330 [2024-11-15 11:07:36.858104] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.330 [2024-11-15 11:07:36.858111] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.330 [2024-11-15 11:07:36.867319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.330 qpair failed and we were unable to recover it.
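Outside the harness, the target bring-up that the xtrace lines above record can be replayed by hand; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py. A sketch with every flag copied from the log (only the direct use of scripts/rpc.py instead of rpc_cmd is assumed):

```sh
# Back a namespace with a 64 MiB, 512-byte-block malloc bdev.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# RDMA transport with the shared receive buffer pool used by this test.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
# Subsystem, namespace, data listener, and discovery listener.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
```

The burst of "completed with error (sct=0, sc=8)" lines above is the host side of the induced disconnect: status code type 0 is the NVMe generic set, and status 0x08 is "Command Aborted due to SQ Deletion", i.e. in-flight reads and writes being failed back as their submission queue goes away.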
00:25:48.330 [2024-11-15 11:07:36.877694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.330 [2024-11-15 11:07:36.877740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.330 [2024-11-15 11:07:36.877758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.330 [2024-11-15 11:07:36.877766] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.330 [2024-11-15 11:07:36.877772] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.330 [2024-11-15 11:07:36.887318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.330 qpair failed and we were unable to recover it.
00:25:48.330 [2024-11-15 11:07:36.897797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.330 [2024-11-15 11:07:36.897845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.330 [2024-11-15 11:07:36.897863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.330 [2024-11-15 11:07:36.897870] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.330 [2024-11-15 11:07:36.897877] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.330 [2024-11-15 11:07:36.907550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.330 qpair failed and we were unable to recover it.
00:25:48.330 [2024-11-15 11:07:36.917790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.330 [2024-11-15 11:07:36.917833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.330 [2024-11-15 11:07:36.917850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.330 [2024-11-15 11:07:36.917858] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.330 [2024-11-15 11:07:36.917864] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.330 [2024-11-15 11:07:36.927423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.330 qpair failed and we were unable to recover it.
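Each repetition of this block is one failed attempt to re-establish I/O qpair 3 after the induced disconnect: the target no longer knows controller ID 0x1 and rejects the Fabrics CONNECT with sct 1 (command-specific), sc 130 (0x82, Connect Invalid Parameters); the host's connect poll surfaces that as rc -5, and the qpair is torn down with transport error -6 (the ENXIO named in the message). A quick way to tally the pattern when triaging a run of this log (the console.log filename is a placeholder):

```sh
# Count connect attempts the host gave up on vs. target-side rejections.
grep -c 'qpair failed and we were unable to recover it' console.log
grep -c 'Unknown controller ID' console.log
# Pull the first full host-side failure signature for a bug report.
grep -m1 -A1 'Connect command failed, rc -5' console.log
```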
00:25:48.330 [2024-11-15 11:07:36.937989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:36.938039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:36.938055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:36.938063] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:36.938070] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:36.947480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:36.957936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:36.957977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:36.957993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:36.958001] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:36.958008] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:36.967573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:36.977944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:36.977981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:36.977999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:36.978006] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:36.978013] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:36.987693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:36.997999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:36.998043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:36.998060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:36.998068] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:36.998075] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.007649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.018196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.018247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.018264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.018271] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:37.018278] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.027802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.038110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.038154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.038175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.038182] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:37.038188] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.047756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.058202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.058246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.058266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.058274] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:37.058281] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.067722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.078305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.078351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.078368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.078376] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:37.078383] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.087926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.098340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.098385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.098403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.098410] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:37.098417] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.108111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.118283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.118328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.118345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.118352] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:37.118359] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.127937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.138435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.138476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.138493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.138503] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.331 [2024-11-15 11:07:37.138510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.331 [2024-11-15 11:07:37.148158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.331 qpair failed and we were unable to recover it.
00:25:48.331 [2024-11-15 11:07:37.158434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.331 [2024-11-15 11:07:37.158476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.331 [2024-11-15 11:07:37.158493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.331 [2024-11-15 11:07:37.158501] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.332 [2024-11-15 11:07:37.158508] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.332 [2024-11-15 11:07:37.168064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.332 qpair failed and we were unable to recover it.
00:25:48.332 [2024-11-15 11:07:37.178625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.332 [2024-11-15 11:07:37.178676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.332 [2024-11-15 11:07:37.178692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.332 [2024-11-15 11:07:37.178701] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.332 [2024-11-15 11:07:37.178707] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.332 [2024-11-15 11:07:37.188310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.332 qpair failed and we were unable to recover it.
00:25:48.332 [2024-11-15 11:07:37.198476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.332 [2024-11-15 11:07:37.198523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.332 [2024-11-15 11:07:37.198540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.332 [2024-11-15 11:07:37.198547] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.332 [2024-11-15 11:07:37.198554] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.332 [2024-11-15 11:07:37.208240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.332 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.218863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.218912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.218935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.218942] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.218950] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.228456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.238770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.238815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.238832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.238839] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.238846] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.248455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.258921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.258962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.258978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.258986] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.258993] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.268551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.279050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.279095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.279112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.279120] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.279126] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.288569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.298986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.299032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.299049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.299057] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.299063] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.308769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.319016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.319060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.319077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.319084] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.319091] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.328771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.339076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.339123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.339140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.339148] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.339154] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.348859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.359205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.589 [2024-11-15 11:07:37.359254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.589 [2024-11-15 11:07:37.359270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.589 [2024-11-15 11:07:37.359278] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.589 [2024-11-15 11:07:37.359284] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.589 [2024-11-15 11:07:37.368759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.589 qpair failed and we were unable to recover it.
00:25:48.589 [2024-11-15 11:07:37.379191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.590 [2024-11-15 11:07:37.379230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.590 [2024-11-15 11:07:37.379246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.590 [2024-11-15 11:07:37.379254] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.590 [2024-11-15 11:07:37.379260] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.590 [2024-11-15 11:07:37.388843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.590 qpair failed and we were unable to recover it.
00:25:48.590 [2024-11-15 11:07:37.399288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.590 [2024-11-15 11:07:37.399329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.590 [2024-11-15 11:07:37.399349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.590 [2024-11-15 11:07:37.399357] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.590 [2024-11-15 11:07:37.399364] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.590 [2024-11-15 11:07:37.408921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.590 qpair failed and we were unable to recover it.
00:25:48.590 [2024-11-15 11:07:37.419330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.590 [2024-11-15 11:07:37.419375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.590 [2024-11-15 11:07:37.419390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.590 [2024-11-15 11:07:37.419398] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.590 [2024-11-15 11:07:37.419404] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.590 [2024-11-15 11:07:37.429116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.590 qpair failed and we were unable to recover it.
00:25:48.590 [2024-11-15 11:07:37.439429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.590 [2024-11-15 11:07:37.439468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.590 [2024-11-15 11:07:37.439483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.590 [2024-11-15 11:07:37.439491] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.590 [2024-11-15 11:07:37.439497] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.590 [2024-11-15 11:07:37.449156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.590 qpair failed and we were unable to recover it.
00:25:48.590 [2024-11-15 11:07:37.459494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.590 [2024-11-15 11:07:37.459538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.590 [2024-11-15 11:07:37.459554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.590 [2024-11-15 11:07:37.459562] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.590 [2024-11-15 11:07:37.459568] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.590 [2024-11-15 11:07:37.469198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.590 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.479554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.479598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.479615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.479626] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.479633] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.489178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.499732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.499775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.499792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.499799] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.499806] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.509174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.519712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.519751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.519767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.519775] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.519781] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.529305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.539830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.539871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.539887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.539895] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.539901] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.549416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.559916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.559961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.559977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.559985] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.559992] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.569511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.580033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.580077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.580093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.580101] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.580108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.589586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.600012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.600053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.600070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.600078] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.600084] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.609622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.620121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.620171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.620187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.620195] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.620202] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.629672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.640095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.640138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.640154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.640166] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.640173] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.649721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.660252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.848 [2024-11-15 11:07:37.660290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.848 [2024-11-15 11:07:37.660306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.848 [2024-11-15 11:07:37.660313] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.848 [2024-11-15 11:07:37.660320] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.848 [2024-11-15 11:07:37.669892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.848 qpair failed and we were unable to recover it.
00:25:48.848 [2024-11-15 11:07:37.680243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.849 [2024-11-15 11:07:37.680280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.849 [2024-11-15 11:07:37.680296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.849 [2024-11-15 11:07:37.680304] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.849 [2024-11-15 11:07:37.680310] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.849 [2024-11-15 11:07:37.689958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.849 qpair failed and we were unable to recover it.
00:25:48.849 [2024-11-15 11:07:37.700285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.849 [2024-11-15 11:07:37.700328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.849 [2024-11-15 11:07:37.700344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.849 [2024-11-15 11:07:37.700352] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.849 [2024-11-15 11:07:37.700358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.849 [2024-11-15 11:07:37.710002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.849 qpair failed and we were unable to recover it.
00:25:48.849 [2024-11-15 11:07:37.720344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:48.849 [2024-11-15 11:07:37.720387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:48.849 [2024-11-15 11:07:37.720405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:48.849 [2024-11-15 11:07:37.720412] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:48.849 [2024-11-15 11:07:37.720419] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:48.849 [2024-11-15 11:07:37.729900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.849 qpair failed and we were unable to recover it.
00:25:49.106 [2024-11-15 11:07:37.740309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.106 [2024-11-15 11:07:37.740349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.106 [2024-11-15 11:07:37.740369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.106 [2024-11-15 11:07:37.740376] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.106 [2024-11-15 11:07:37.740383] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.106 [2024-11-15 11:07:37.750139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-11-15 11:07:37.760398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.106 [2024-11-15 11:07:37.760441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.106 [2024-11-15 11:07:37.760458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.106 [2024-11-15 11:07:37.760465] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.106 [2024-11-15 11:07:37.760472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.106 [2024-11-15 11:07:37.770180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-11-15 11:07:37.780610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.106 [2024-11-15 11:07:37.780655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.106 [2024-11-15 11:07:37.780672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.106 [2024-11-15 11:07:37.780679] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.107 [2024-11-15 11:07:37.780686] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.107 [2024-11-15 11:07:37.790305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-11-15 11:07:37.800543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.107 [2024-11-15 11:07:37.800585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.107 [2024-11-15 11:07:37.800602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.107 [2024-11-15 11:07:37.800609] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.107 [2024-11-15 11:07:37.800616] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.107 [2024-11-15 11:07:37.810356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-11-15 11:07:37.820757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.107 [2024-11-15 11:07:37.820798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.107 [2024-11-15 11:07:37.820815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.107 [2024-11-15 11:07:37.820823] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.107 [2024-11-15 11:07:37.820833] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.107 [2024-11-15 11:07:37.830319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-11-15 11:07:37.840602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.107 [2024-11-15 11:07:37.840642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.107 [2024-11-15 11:07:37.840659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.107 [2024-11-15 11:07:37.840667] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.107 [2024-11-15 11:07:37.840674] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.107 [2024-11-15 11:07:37.850260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-11-15 11:07:37.860733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.107 [2024-11-15 11:07:37.860773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.107 [2024-11-15 11:07:37.860790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.107 [2024-11-15 11:07:37.860798] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.107 [2024-11-15 11:07:37.860804] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.107 [2024-11-15 11:07:37.870373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-11-15 11:07:37.880668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:49.107 [2024-11-15 11:07:37.880710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:49.107 [2024-11-15 11:07:37.880727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:49.107 [2024-11-15 11:07:37.880735] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:49.107 [2024-11-15 11:07:37.880742] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:49.107 [2024-11-15 11:07:37.890284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-11-15 11:07:37.900801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.107 [2024-11-15 11:07:37.900846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.107 [2024-11-15 11:07:37.900862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.107 [2024-11-15 11:07:37.900870] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.107 [2024-11-15 11:07:37.900876] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.107 [2024-11-15 11:07:37.910399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-11-15 11:07:37.920828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.107 [2024-11-15 11:07:37.920867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.107 [2024-11-15 11:07:37.920883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.107 [2024-11-15 11:07:37.920891] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.107 [2024-11-15 11:07:37.920897] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.107 [2024-11-15 11:07:37.930681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-11-15 11:07:37.940874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.107 [2024-11-15 11:07:37.940914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.107 [2024-11-15 11:07:37.940930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.107 [2024-11-15 11:07:37.940938] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.107 [2024-11-15 11:07:37.940945] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.107 [2024-11-15 11:07:37.950608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.107 qpair failed and we were unable to recover it. 
00:25:49.107 [2024-11-15 11:07:37.960977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.107 [2024-11-15 11:07:37.961021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.107 [2024-11-15 11:07:37.961037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.107 [2024-11-15 11:07:37.961045] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.107 [2024-11-15 11:07:37.961052] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.107 [2024-11-15 11:07:37.970667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-11-15 11:07:37.980951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.107 [2024-11-15 11:07:37.980995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.107 [2024-11-15 11:07:37.981011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.107 [2024-11-15 11:07:37.981019] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.107 [2024-11-15 11:07:37.981025] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.107 [2024-11-15 11:07:37.990663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.365 [2024-11-15 11:07:38.001047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.001094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.001110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.001118] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.001124] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.010792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 
00:25:49.366 [2024-11-15 11:07:38.021046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.021090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.021106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.021114] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.021121] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.030839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.041269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.041313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.041329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.041337] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.041344] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.050877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.061349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.061388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.061405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.061413] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.061419] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.070937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 
00:25:49.366 [2024-11-15 11:07:38.081302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.081343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.081362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.081370] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.081376] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.090973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.101346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.101386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.101402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.101410] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.101416] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.111073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.121398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.121441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.121457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.121465] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.121471] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.131061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 
00:25:49.366 [2024-11-15 11:07:38.141677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.141721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.141738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.141747] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.141753] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.151095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.161493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.161537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.161554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.161562] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.161573] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.171261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.181606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.181651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.181668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.181676] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.181682] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.191293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 
00:25:49.366 [2024-11-15 11:07:38.201707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.201753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.201769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.201777] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.201783] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.211367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.221710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.221756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.221772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.221780] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.221787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.366 [2024-11-15 11:07:38.231360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.366 qpair failed and we were unable to recover it. 00:25:49.366 [2024-11-15 11:07:38.241823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.366 [2024-11-15 11:07:38.241865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.366 [2024-11-15 11:07:38.241882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.366 [2024-11-15 11:07:38.241889] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.366 [2024-11-15 11:07:38.241896] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.251578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 
00:25:49.624 [2024-11-15 11:07:38.261812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.261856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.261872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.261880] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.261886] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.271327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 00:25:49.624 [2024-11-15 11:07:38.281831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.281871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.281888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.281896] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.281903] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.291441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 00:25:49.624 [2024-11-15 11:07:38.301938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.301980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.302003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.302011] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.302017] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.311459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 
00:25:49.624 [2024-11-15 11:07:38.321960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.321997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.322013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.322020] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.322027] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.331542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 00:25:49.624 [2024-11-15 11:07:38.342019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.342066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.342082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.342090] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.342096] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.351662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 00:25:49.624 [2024-11-15 11:07:38.362110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.362150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.362172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.362180] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.362186] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.371550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 
00:25:49.624 [2024-11-15 11:07:38.382105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.382150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.382171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.382180] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.382187] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.391736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 00:25:49.624 [2024-11-15 11:07:38.402222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.402264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.402281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.624 [2024-11-15 11:07:38.402289] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.624 [2024-11-15 11:07:38.402296] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.624 [2024-11-15 11:07:38.411759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.624 qpair failed and we were unable to recover it. 00:25:49.624 [2024-11-15 11:07:38.422257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.624 [2024-11-15 11:07:38.422301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.624 [2024-11-15 11:07:38.422317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.625 [2024-11-15 11:07:38.422329] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.625 [2024-11-15 11:07:38.422336] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.625 [2024-11-15 11:07:38.431821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.625 qpair failed and we were unable to recover it. 
00:25:49.625 [2024-11-15 11:07:38.442283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.625 [2024-11-15 11:07:38.442326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.625 [2024-11-15 11:07:38.442342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.625 [2024-11-15 11:07:38.442350] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.625 [2024-11-15 11:07:38.442357] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.625 [2024-11-15 11:07:38.451935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.625 qpair failed and we were unable to recover it. 00:25:49.625 [2024-11-15 11:07:38.462354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.625 [2024-11-15 11:07:38.462401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.625 [2024-11-15 11:07:38.462417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.625 [2024-11-15 11:07:38.462425] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.625 [2024-11-15 11:07:38.462431] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.625 [2024-11-15 11:07:38.471882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.625 qpair failed and we were unable to recover it. 00:25:49.625 [2024-11-15 11:07:38.482463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.625 [2024-11-15 11:07:38.482510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.625 [2024-11-15 11:07:38.482527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.625 [2024-11-15 11:07:38.482535] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.625 [2024-11-15 11:07:38.482541] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.625 [2024-11-15 11:07:38.492120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.625 qpair failed and we were unable to recover it. 
00:25:49.625 [2024-11-15 11:07:38.502499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.625 [2024-11-15 11:07:38.502538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.625 [2024-11-15 11:07:38.502554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.625 [2024-11-15 11:07:38.502562] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.625 [2024-11-15 11:07:38.502572] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.512123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 00:25:49.883 [2024-11-15 11:07:38.522467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.522509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.522525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.522533] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.522540] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.532085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 00:25:49.883 [2024-11-15 11:07:38.542561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.542608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.542625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.542633] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.542640] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.552074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 
00:25:49.883 [2024-11-15 11:07:38.562666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.562710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.562727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.562734] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.562741] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.572200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 00:25:49.883 [2024-11-15 11:07:38.582795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.582837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.582854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.582861] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.582868] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.592311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 00:25:49.883 [2024-11-15 11:07:38.602705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.602748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.602764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.602772] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.602779] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.612440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 
00:25:49.883 [2024-11-15 11:07:38.622902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.622944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.622961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.622969] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.622975] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.632507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 00:25:49.883 [2024-11-15 11:07:38.642937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.642979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.642995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.643003] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.643010] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.652445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 00:25:49.883 [2024-11-15 11:07:38.662949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.662987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.883 [2024-11-15 11:07:38.663004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.883 [2024-11-15 11:07:38.663013] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.883 [2024-11-15 11:07:38.663020] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.883 [2024-11-15 11:07:38.672533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.883 qpair failed and we were unable to recover it. 
00:25:49.883 [2024-11-15 11:07:38.683083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.883 [2024-11-15 11:07:38.683128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.884 [2024-11-15 11:07:38.683148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.884 [2024-11-15 11:07:38.683156] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.884 [2024-11-15 11:07:38.683168] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.884 [2024-11-15 11:07:38.692618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-15 11:07:38.703095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.884 [2024-11-15 11:07:38.703141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.884 [2024-11-15 11:07:38.703157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.884 [2024-11-15 11:07:38.703177] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.884 [2024-11-15 11:07:38.703185] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.884 [2024-11-15 11:07:38.712662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-15 11:07:38.723085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.884 [2024-11-15 11:07:38.723125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.884 [2024-11-15 11:07:38.723141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.884 [2024-11-15 11:07:38.723149] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.884 [2024-11-15 11:07:38.723156] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.884 [2024-11-15 11:07:38.732580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.884 qpair failed and we were unable to recover it. 
00:25:49.884 [2024-11-15 11:07:38.743216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.884 [2024-11-15 11:07:38.743260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.884 [2024-11-15 11:07:38.743276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.884 [2024-11-15 11:07:38.743284] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.884 [2024-11-15 11:07:38.743290] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:49.884 [2024-11-15 11:07:38.752670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-15 11:07:38.763225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:49.884 [2024-11-15 11:07:38.763266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:49.884 [2024-11-15 11:07:38.763283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:49.884 [2024-11-15 11:07:38.763294] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:49.884 [2024-11-15 11:07:38.763300] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.772756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 00:25:50.142 [2024-11-15 11:07:38.783331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.783375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.783391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.783399] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.783405] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.792884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 
00:25:50.142 [2024-11-15 11:07:38.803387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.803425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.803441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.803449] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.803456] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.812996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 00:25:50.142 [2024-11-15 11:07:38.823454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.823496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.823513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.823521] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.823528] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.832987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 00:25:50.142 [2024-11-15 11:07:38.843489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.843532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.843550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.843557] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.843564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.853138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 
00:25:50.142 [2024-11-15 11:07:38.863574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.863619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.863636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.863643] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.863650] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.873092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 00:25:50.142 [2024-11-15 11:07:38.883531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.883578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.883594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.883602] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.883608] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.893271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 00:25:50.142 [2024-11-15 11:07:38.903655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.903697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.903713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.903720] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.903727] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.913332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 
00:25:50.142 [2024-11-15 11:07:38.923706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.923748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.923765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.923773] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.923779] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.933284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 00:25:50.142 [2024-11-15 11:07:38.943869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.943911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.943929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.943937] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.943943] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.953369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 00:25:50.142 [2024-11-15 11:07:38.963910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.963956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.142 [2024-11-15 11:07:38.963972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.142 [2024-11-15 11:07:38.963980] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.142 [2024-11-15 11:07:38.963986] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.142 [2024-11-15 11:07:38.973417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.142 qpair failed and we were unable to recover it. 
00:25:50.142 [2024-11-15 11:07:38.983958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.142 [2024-11-15 11:07:38.983998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.143 [2024-11-15 11:07:38.984015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.143 [2024-11-15 11:07:38.984022] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.143 [2024-11-15 11:07:38.984029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.143 [2024-11-15 11:07:38.993543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.143 qpair failed and we were unable to recover it. 00:25:50.143 [2024-11-15 11:07:39.003895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.143 [2024-11-15 11:07:39.003934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.143 [2024-11-15 11:07:39.003951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.143 [2024-11-15 11:07:39.003958] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.143 [2024-11-15 11:07:39.003965] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.143 [2024-11-15 11:07:39.013539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.143 qpair failed and we were unable to recover it. 00:25:50.143 [2024-11-15 11:07:39.024062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.143 [2024-11-15 11:07:39.024107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.143 [2024-11-15 11:07:39.024126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.143 [2024-11-15 11:07:39.024134] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.143 [2024-11-15 11:07:39.024141] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:50.401 [2024-11-15 11:07:39.033531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:50.401 qpair failed and we were unable to recover it. 
00:25:50.401 [2024-11-15 11:07:39.044111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.401 [2024-11-15 11:07:39.044151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.401 [2024-11-15 11:07:39.044172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.401 [2024-11-15 11:07:39.044180] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.401 [2024-11-15 11:07:39.044186] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.401 [2024-11-15 11:07:39.053749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.401 qpair failed and we were unable to recover it.
00:25:50.401 [2024-11-15 11:07:39.064150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.401 [2024-11-15 11:07:39.064198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.401 [2024-11-15 11:07:39.064215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.401 [2024-11-15 11:07:39.064223] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.401 [2024-11-15 11:07:39.064229] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.401 [2024-11-15 11:07:39.073737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.401 qpair failed and we were unable to recover it.
00:25:50.401 [2024-11-15 11:07:39.084233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.401 [2024-11-15 11:07:39.084273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.401 [2024-11-15 11:07:39.084290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.401 [2024-11-15 11:07:39.084297] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.401 [2024-11-15 11:07:39.084304] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.093768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.104260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.104301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.104317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.104328] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.104335] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.113880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.124422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.124462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.124479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.124487] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.124493] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.133953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.144400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.144445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.144461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.144469] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.144475] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.154017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.164518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.164560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.164577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.164584] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.164591] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.174038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.184543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.184586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.184601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.184609] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.184616] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.194168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.204499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.204540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.204556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.204563] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.204570] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.214103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.224683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.224729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.224746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.224753] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.224759] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.234213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.244652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.244696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.244712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.244720] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.244726] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.254421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.402 qpair failed and we were unable to recover it.
00:25:50.402 [2024-11-15 11:07:39.264747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.402 [2024-11-15 11:07:39.264796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.402 [2024-11-15 11:07:39.264812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.402 [2024-11-15 11:07:39.264820] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.402 [2024-11-15 11:07:39.264826] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.402 [2024-11-15 11:07:39.274498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.403 qpair failed and we were unable to recover it.
00:25:50.403 [2024-11-15 11:07:39.284970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.403 [2024-11-15 11:07:39.285014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.403 [2024-11-15 11:07:39.285030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.403 [2024-11-15 11:07:39.285038] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.403 [2024-11-15 11:07:39.285044] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.660 [2024-11-15 11:07:39.294679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.660 qpair failed and we were unable to recover it.
00:25:50.660 [2024-11-15 11:07:39.305011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.660 [2024-11-15 11:07:39.305053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.660 [2024-11-15 11:07:39.305070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.660 [2024-11-15 11:07:39.305078] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.660 [2024-11-15 11:07:39.305084] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.660 [2024-11-15 11:07:39.314602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.660 qpair failed and we were unable to recover it.
00:25:50.660 [2024-11-15 11:07:39.325015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.325056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.325073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.325081] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.325088] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.334734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
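The target-side line (ctrlr.c:_nvmf_ctrlr_add_io_qpair) is the root cause each time: the I/O-qpair CONNECT carries controller ID 0x1, but the subsystem no longer has a live controller with that cntlid, most likely because the test already tore down the admin connection that created it, so the qpair is rejected before it becomes usable. Conceptually the target performs a lookup of this shape; the struct names and fixed-size table below are hypothetical stand-ins, not the SPDK source:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-ins for the target's bookkeeping; SPDK's real
     * types live in lib/nvmf and are more involved. */
    struct ctrlr { uint16_t cntlid; };
    struct subsystem { struct ctrlr *ctrlrs[256]; };

    /* On an I/O-qpair CONNECT the target must find the controller that
     * the earlier admin-queue CONNECT created. If it is gone, the correct
     * answer is the "Connect Invalid Parameters" rejection seen in the
     * log as "Unknown controller ID 0x1". */
    static struct ctrlr *find_ctrlr(struct subsystem *s, uint16_t cntlid)
    {
        for (size_t i = 0; i < 256; i++) {
            if (s->ctrlrs[i] != NULL && s->ctrlrs[i]->cntlid == cntlid) {
                return s->ctrlrs[i];
            }
        }
        return NULL; /* caller rejects the CONNECT */
    }

    int main(void)
    {
        struct subsystem s = {0};                 /* no controllers left */
        return find_ctrlr(&s, 0x1) == NULL ? 0 : 1;
    }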
00:25:50.661 [2024-11-15 11:07:39.345079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.345121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.345137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.345145] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.345151] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.354830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.365082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.365121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.365142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.365150] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.365156] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.374716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.385192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.385233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.385249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.385257] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.385263] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.394950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.405191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.405233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.405250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.405257] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.405264] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.414984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.425270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.425319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.425336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.425344] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.425350] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.435013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.445436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.445478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.445495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.445502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.445512] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.455103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.465459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.465505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.465521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.465529] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.465535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.475119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.485506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.485550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.485566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.485573] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.485580] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.495223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.505593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.505640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.505656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.505664] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.505670] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.515284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.661 [2024-11-15 11:07:39.525828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.661 [2024-11-15 11:07:39.525872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.661 [2024-11-15 11:07:39.525889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.661 [2024-11-15 11:07:39.525897] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.661 [2024-11-15 11:07:39.525903] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.661 [2024-11-15 11:07:39.535297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.661 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.545854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.545897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.545913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.545921] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.545928] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.555384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.565733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.565776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.565792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.565799] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.565806] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.575504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.585837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.585886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.585902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.585910] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.585918] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.595632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.605952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.605990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.606005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.606013] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.606020] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.615624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.625956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.625998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.626014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.626022] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.626028] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.635656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.646028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.646071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.646086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.646094] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.646100] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.655831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.665984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.666032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.666048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.666055] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.666062] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.675817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.686031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.686068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.686084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.686091] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.686098] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.695840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.706143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.706189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.920 [2024-11-15 11:07:39.706209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.920 [2024-11-15 11:07:39.706217] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.920 [2024-11-15 11:07:39.706224] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.920 [2024-11-15 11:07:39.715887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.920 qpair failed and we were unable to recover it.
00:25:50.920 [2024-11-15 11:07:39.726170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.920 [2024-11-15 11:07:39.726214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.921 [2024-11-15 11:07:39.726231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.921 [2024-11-15 11:07:39.726238] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.921 [2024-11-15 11:07:39.726245] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.921 [2024-11-15 11:07:39.736081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.921 qpair failed and we were unable to recover it.
00:25:50.921 [2024-11-15 11:07:39.746303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.921 [2024-11-15 11:07:39.746349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.921 [2024-11-15 11:07:39.746365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.921 [2024-11-15 11:07:39.746373] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.921 [2024-11-15 11:07:39.746379] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.921 [2024-11-15 11:07:39.756027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.921 qpair failed and we were unable to recover it.
00:25:50.921 [2024-11-15 11:07:39.766312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.921 [2024-11-15 11:07:39.766356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.921 [2024-11-15 11:07:39.766373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.921 [2024-11-15 11:07:39.766380] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.921 [2024-11-15 11:07:39.766387] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.921 [2024-11-15 11:07:39.775995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.921 qpair failed and we were unable to recover it.
00:25:50.921 [2024-11-15 11:07:39.786493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.921 [2024-11-15 11:07:39.786540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.921 [2024-11-15 11:07:39.786557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.921 [2024-11-15 11:07:39.786565] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.921 [2024-11-15 11:07:39.786575] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:50.921 [2024-11-15 11:07:39.796165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:50.921 qpair failed and we were unable to recover it.
00:25:51.178 [2024-11-15 11:07:39.806575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.178 [2024-11-15 11:07:39.806618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.178 [2024-11-15 11:07:39.806634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.178 [2024-11-15 11:07:39.806642] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.178 [2024-11-15 11:07:39.806648] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.178 [2024-11-15 11:07:39.816193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.178 qpair failed and we were unable to recover it.
00:25:51.178 [2024-11-15 11:07:39.826686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.178 [2024-11-15 11:07:39.826730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.178 [2024-11-15 11:07:39.826747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.178 [2024-11-15 11:07:39.826754] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.178 [2024-11-15 11:07:39.826761] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.178 [2024-11-15 11:07:39.836292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.178 qpair failed and we were unable to recover it.
00:25:51.178 [2024-11-15 11:07:39.846699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.178 [2024-11-15 11:07:39.846739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.178 [2024-11-15 11:07:39.846755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.178 [2024-11-15 11:07:39.846763] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.178 [2024-11-15 11:07:39.846769] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.178 [2024-11-15 11:07:39.856366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.178 qpair failed and we were unable to recover it.
00:25:51.178 [2024-11-15 11:07:39.866669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.178 [2024-11-15 11:07:39.866714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.178 [2024-11-15 11:07:39.866731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.178 [2024-11-15 11:07:39.866738] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.178 [2024-11-15 11:07:39.866745] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.178 [2024-11-15 11:07:39.876289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.178 qpair failed and we were unable to recover it.
00:25:51.178 [2024-11-15 11:07:39.886757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.178 [2024-11-15 11:07:39.886803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.178 [2024-11-15 11:07:39.886820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.178 [2024-11-15 11:07:39.886827] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.178 [2024-11-15 11:07:39.886834] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.178 [2024-11-15 11:07:39.896284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.178 qpair failed and we were unable to recover it.
00:25:51.178 [2024-11-15 11:07:39.906788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.178 [2024-11-15 11:07:39.906835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.178 [2024-11-15 11:07:39.906851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:39.906859] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:39.906866] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:39.916501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.179 [2024-11-15 11:07:39.926883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.179 [2024-11-15 11:07:39.926922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.179 [2024-11-15 11:07:39.926939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:39.926946] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:39.926953] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:39.936562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.179 [2024-11-15 11:07:39.946949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.179 [2024-11-15 11:07:39.946992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.179 [2024-11-15 11:07:39.947008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:39.947016] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:39.947022] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:39.956506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.179 [2024-11-15 11:07:39.967035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.179 [2024-11-15 11:07:39.967080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.179 [2024-11-15 11:07:39.967096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:39.967104] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:39.967110] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:39.976627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.179 [2024-11-15 11:07:39.987023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.179 [2024-11-15 11:07:39.987063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.179 [2024-11-15 11:07:39.987080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:39.987088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:39.987094] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:39.996485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.179 [2024-11-15 11:07:40.007052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.179 [2024-11-15 11:07:40.007104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.179 [2024-11-15 11:07:40.007129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:40.007138] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:40.007146] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:40.016634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.179 [2024-11-15 11:07:40.026885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.179 [2024-11-15 11:07:40.026925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.179 [2024-11-15 11:07:40.026943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:40.026950] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:40.026957] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:40.036696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.179 [2024-11-15 11:07:40.047039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.179 [2024-11-15 11:07:40.047083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.179 [2024-11-15 11:07:40.047100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.179 [2024-11-15 11:07:40.047111] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.179 [2024-11-15 11:07:40.047118] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.179 [2024-11-15 11:07:40.056767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.179 qpair failed and we were unable to recover it.
00:25:51.436 [2024-11-15 11:07:40.067118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.436 [2024-11-15 11:07:40.067160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.436 [2024-11-15 11:07:40.067183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.436 [2024-11-15 11:07:40.067192] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.436 [2024-11-15 11:07:40.067198] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.436 [2024-11-15 11:07:40.076870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.436 qpair failed and we were unable to recover it.
00:25:51.436 [2024-11-15 11:07:40.087312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.436 [2024-11-15 11:07:40.087350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.436 [2024-11-15 11:07:40.087367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.436 [2024-11-15 11:07:40.087375] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.436 [2024-11-15 11:07:40.087382] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.436 [2024-11-15 11:07:40.096919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.436 qpair failed and we were unable to recover it.
00:25:51.436 [2024-11-15 11:07:40.107236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.436 [2024-11-15 11:07:40.107277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.436 [2024-11-15 11:07:40.107294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.436 [2024-11-15 11:07:40.107302] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.436 [2024-11-15 11:07:40.107308] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.436 [2024-11-15 11:07:40.116810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.436 qpair failed and we were unable to recover it.
00:25:51.436 [2024-11-15 11:07:40.127704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.436 [2024-11-15 11:07:40.127746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.436 [2024-11-15 11:07:40.127763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.436 [2024-11-15 11:07:40.127771] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.436 [2024-11-15 11:07:40.127781] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.436 [2024-11-15 11:07:40.136992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.436 qpair failed and we were unable to recover it.
00:25:51.436 [2024-11-15 11:07:40.147452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.436 [2024-11-15 11:07:40.147497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.436 [2024-11-15 11:07:40.147514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.436 [2024-11-15 11:07:40.147522] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.436 [2024-11-15 11:07:40.147528] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.436 [2024-11-15 11:07:40.157097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.436 qpair failed and we were unable to recover it.
00:25:51.436 [2024-11-15 11:07:40.167470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:51.436 [2024-11-15 11:07:40.167511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:51.437 [2024-11-15 11:07:40.167528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:51.437 [2024-11-15 11:07:40.167535] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:51.437 [2024-11-15 11:07:40.167542] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:25:51.437 [2024-11-15 11:07:40.177144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:51.437 qpair failed and we were unable to recover it.
00:25:51.437 [2024-11-15 11:07:40.187643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.437 [2024-11-15 11:07:40.187687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.437 [2024-11-15 11:07:40.187704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.437 [2024-11-15 11:07:40.187712] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.437 [2024-11-15 11:07:40.187719] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.437 [2024-11-15 11:07:40.197213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.437 qpair failed and we were unable to recover it. 00:25:51.437 [2024-11-15 11:07:40.207685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.437 [2024-11-15 11:07:40.207728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.437 [2024-11-15 11:07:40.207745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.437 [2024-11-15 11:07:40.207753] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.437 [2024-11-15 11:07:40.207760] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.437 [2024-11-15 11:07:40.217195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.437 qpair failed and we were unable to recover it. 00:25:51.437 [2024-11-15 11:07:40.227861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.437 [2024-11-15 11:07:40.227905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.437 [2024-11-15 11:07:40.227923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.437 [2024-11-15 11:07:40.227930] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.437 [2024-11-15 11:07:40.227937] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.437 [2024-11-15 11:07:40.237310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.437 qpair failed and we were unable to recover it. 
00:25:51.437 [2024-11-15 11:07:40.247842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.437 [2024-11-15 11:07:40.247886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.437 [2024-11-15 11:07:40.247902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.437 [2024-11-15 11:07:40.247909] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.437 [2024-11-15 11:07:40.247916] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.437 [2024-11-15 11:07:40.257251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.437 qpair failed and we were unable to recover it. 00:25:51.437 [2024-11-15 11:07:40.267822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.437 [2024-11-15 11:07:40.267866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.437 [2024-11-15 11:07:40.267882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.437 [2024-11-15 11:07:40.267889] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.437 [2024-11-15 11:07:40.267896] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.437 [2024-11-15 11:07:40.277385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.437 qpair failed and we were unable to recover it. 00:25:51.437 [2024-11-15 11:07:40.287923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.437 [2024-11-15 11:07:40.287968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.437 [2024-11-15 11:07:40.287984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.437 [2024-11-15 11:07:40.287992] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.437 [2024-11-15 11:07:40.287998] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.437 [2024-11-15 11:07:40.297589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.437 qpair failed and we were unable to recover it. 
00:25:51.437 [2024-11-15 11:07:40.307966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.437 [2024-11-15 11:07:40.308007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.437 [2024-11-15 11:07:40.308027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.437 [2024-11-15 11:07:40.308035] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.437 [2024-11-15 11:07:40.308041] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.437 [2024-11-15 11:07:40.317687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.437 qpair failed and we were unable to recover it. 00:25:51.695 [2024-11-15 11:07:40.328022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.328066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.328082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.328090] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.328096] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.337528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 00:25:51.695 [2024-11-15 11:07:40.348045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.348087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.348103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.348111] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.348118] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.357622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 
00:25:51.695 [2024-11-15 11:07:40.368092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.368135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.368151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.368158] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.368177] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.377684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 00:25:51.695 [2024-11-15 11:07:40.388105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.388150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.388177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.388189] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.388196] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.397707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 00:25:51.695 [2024-11-15 11:07:40.408237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.408275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.408291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.408299] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.408305] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.417821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 
00:25:51.695 [2024-11-15 11:07:40.428167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.428213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.428229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.428236] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.428243] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.437869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 00:25:51.695 [2024-11-15 11:07:40.448208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.448249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.448266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.448273] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.448280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.457870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 00:25:51.695 [2024-11-15 11:07:40.468440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.468480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.468496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.468504] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.468511] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.477939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 
00:25:51.695 [2024-11-15 11:07:40.488428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.695 [2024-11-15 11:07:40.488470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.695 [2024-11-15 11:07:40.488485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.695 [2024-11-15 11:07:40.488493] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.695 [2024-11-15 11:07:40.488500] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.695 [2024-11-15 11:07:40.498030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.695 qpair failed and we were unable to recover it. 00:25:51.696 [2024-11-15 11:07:40.508461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.696 [2024-11-15 11:07:40.508502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.696 [2024-11-15 11:07:40.508518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.696 [2024-11-15 11:07:40.508525] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.696 [2024-11-15 11:07:40.508532] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.696 [2024-11-15 11:07:40.518170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.696 qpair failed and we were unable to recover it. 00:25:51.696 [2024-11-15 11:07:40.528502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.696 [2024-11-15 11:07:40.528544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.696 [2024-11-15 11:07:40.528560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.696 [2024-11-15 11:07:40.528568] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.696 [2024-11-15 11:07:40.528574] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.696 [2024-11-15 11:07:40.538100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.696 qpair failed and we were unable to recover it. 
00:25:51.696 [2024-11-15 11:07:40.548678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.696 [2024-11-15 11:07:40.548722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.696 [2024-11-15 11:07:40.548739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.696 [2024-11-15 11:07:40.548746] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.696 [2024-11-15 11:07:40.548753] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.696 [2024-11-15 11:07:40.558296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.696 qpair failed and we were unable to recover it. 00:25:51.696 [2024-11-15 11:07:40.568695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.696 [2024-11-15 11:07:40.568731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.696 [2024-11-15 11:07:40.568747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.696 [2024-11-15 11:07:40.568755] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.696 [2024-11-15 11:07:40.568762] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.696 [2024-11-15 11:07:40.578189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.696 qpair failed and we were unable to recover it. 00:25:51.953 [2024-11-15 11:07:40.588810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.953 [2024-11-15 11:07:40.588848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.953 [2024-11-15 11:07:40.588864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.953 [2024-11-15 11:07:40.588872] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.953 [2024-11-15 11:07:40.588879] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.953 [2024-11-15 11:07:40.598371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.953 qpair failed and we were unable to recover it. 
00:25:51.953 [2024-11-15 11:07:40.608850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.953 [2024-11-15 11:07:40.608890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.953 [2024-11-15 11:07:40.608907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.953 [2024-11-15 11:07:40.608914] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.953 [2024-11-15 11:07:40.608921] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.953 [2024-11-15 11:07:40.618424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.953 qpair failed and we were unable to recover it. 00:25:51.953 [2024-11-15 11:07:40.628966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.953 [2024-11-15 11:07:40.629005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.953 [2024-11-15 11:07:40.629022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.953 [2024-11-15 11:07:40.629029] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.953 [2024-11-15 11:07:40.629036] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.953 [2024-11-15 11:07:40.638561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.953 qpair failed and we were unable to recover it. 00:25:51.954 [2024-11-15 11:07:40.648918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.648956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.648976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.648984] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.648990] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.658619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 
00:25:51.954 [2024-11-15 11:07:40.668974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.669014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.669030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.669038] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.669044] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.678667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 00:25:51.954 [2024-11-15 11:07:40.689121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.689178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.689194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.689202] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.689209] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.698604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 00:25:51.954 [2024-11-15 11:07:40.709171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.709213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.709229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.709237] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.709243] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.718716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 
00:25:51.954 [2024-11-15 11:07:40.729252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.729292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.729309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.729320] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.729327] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.738910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 00:25:51.954 [2024-11-15 11:07:40.749240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.749286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.749302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.749310] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.749316] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.758878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 00:25:51.954 [2024-11-15 11:07:40.769292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.769336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.769352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.769360] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.769366] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.778905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 
00:25:51.954 [2024-11-15 11:07:40.789414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.789458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.789474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.789482] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.789489] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.799065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 00:25:51.954 [2024-11-15 11:07:40.809542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.809581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.809597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.809604] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.809611] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:51.954 [2024-11-15 11:07:40.819128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:51.954 qpair failed and we were unable to recover it. 00:25:51.954 [2024-11-15 11:07:40.829551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.954 [2024-11-15 11:07:40.829598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.954 [2024-11-15 11:07:40.829614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.954 [2024-11-15 11:07:40.829622] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.954 [2024-11-15 11:07:40.829629] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:52.211 [2024-11-15 11:07:40.839096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:52.211 qpair failed and we were unable to recover it. 
00:25:52.211 [2024-11-15 11:07:40.849599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.211 [2024-11-15 11:07:40.849640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.211 [2024-11-15 11:07:40.849656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.211 [2024-11-15 11:07:40.849664] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.211 [2024-11-15 11:07:40.849671] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:52.211 [2024-11-15 11:07:40.859374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:40.869768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:40.869809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:40.869825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:40.869832] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:40.869839] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:52.212 [2024-11-15 11:07:40.879370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:40.889765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:40.889803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:40.889819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:40.889828] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:40.889834] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:52.212 [2024-11-15 11:07:40.899303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:52.212 qpair failed and we were unable to recover it. 
00:25:52.212 [2024-11-15 11:07:40.909843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:40.909892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:40.909917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:40.909928] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:40.909938] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:40.919427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:40.929823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:40.929867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:40.929883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:40.929891] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:40.929898] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:40.939475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:40.949857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:40.949898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:40.949914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:40.949922] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:40.949928] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:40.959568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 
00:25:52.212 [2024-11-15 11:07:40.970083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:40.970128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:40.970145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:40.970152] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:40.970159] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:40.979636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:40.990052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:40.990098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:40.990117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:40.990125] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:40.990132] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:40.999743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:41.010242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:41.010282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:41.010299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:41.010307] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:41.010314] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:41.019768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 
00:25:52.212 [2024-11-15 11:07:41.030242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:41.030286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:41.030302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:41.030310] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:41.030317] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:41.039752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:41.050138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:41.050184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:41.050201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:41.050209] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:41.050215] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:41.059926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.212 qpair failed and we were unable to recover it. 00:25:52.212 [2024-11-15 11:07:41.070258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.212 [2024-11-15 11:07:41.070305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.212 [2024-11-15 11:07:41.070322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.212 [2024-11-15 11:07:41.070330] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.212 [2024-11-15 11:07:41.070340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.212 [2024-11-15 11:07:41.079868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.213 qpair failed and we were unable to recover it. 
00:25:52.213 [2024-11-15 11:07:41.090349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.213 [2024-11-15 11:07:41.090390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.213 [2024-11-15 11:07:41.090407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.213 [2024-11-15 11:07:41.090415] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.213 [2024-11-15 11:07:41.090422] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.469 [2024-11-15 11:07:41.099984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.469 qpair failed and we were unable to recover it. 00:25:52.469 [2024-11-15 11:07:41.110384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.469 [2024-11-15 11:07:41.110429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.469 [2024-11-15 11:07:41.110445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.469 [2024-11-15 11:07:41.110453] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.469 [2024-11-15 11:07:41.110459] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.469 [2024-11-15 11:07:41.120066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.469 qpair failed and we were unable to recover it. 00:25:52.469 [2024-11-15 11:07:41.130506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.130543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.130559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.130566] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.130573] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.140175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 
00:25:52.470 [2024-11-15 11:07:41.150468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.150513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.150530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.150538] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.150544] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.160187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 00:25:52.470 [2024-11-15 11:07:41.170496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.170541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.170557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.170565] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.170571] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.180184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 00:25:52.470 [2024-11-15 11:07:41.190645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.190692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.190708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.190715] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.190722] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.200311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 
00:25:52.470 [2024-11-15 11:07:41.210692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.210732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.210750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.210759] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.210766] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.220343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 00:25:52.470 [2024-11-15 11:07:41.230756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.230801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.230817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.230825] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.230831] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.240400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 00:25:52.470 [2024-11-15 11:07:41.250869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.250918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.250935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.250942] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.250948] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.260503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 
00:25:52.470 [2024-11-15 11:07:41.270826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.270868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.270884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.270892] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.270899] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.280648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 00:25:52.470 [2024-11-15 11:07:41.291014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.291056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.291072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.291080] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.291086] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.300633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 00:25:52.470 [2024-11-15 11:07:41.311133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.311178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.311194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.311201] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.311208] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.320722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 
00:25:52.470 [2024-11-15 11:07:41.331254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.331302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.331322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.331330] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.331336] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.470 [2024-11-15 11:07:41.340915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.470 qpair failed and we were unable to recover it. 00:25:52.470 [2024-11-15 11:07:41.351235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.470 [2024-11-15 11:07:41.351280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.470 [2024-11-15 11:07:41.351302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.470 [2024-11-15 11:07:41.351310] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.470 [2024-11-15 11:07:41.351317] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.360868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 00:25:52.728 [2024-11-15 11:07:41.371291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.371332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.371348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.371356] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.371362] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.380953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 
00:25:52.728 [2024-11-15 11:07:41.391385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.391429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.391446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.391453] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.391460] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.400986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 00:25:52.728 [2024-11-15 11:07:41.411441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.411484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.411500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.411508] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.411518] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.421068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 00:25:52.728 [2024-11-15 11:07:41.431403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.431452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.431467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.431475] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.431481] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.441135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 
00:25:52.728 [2024-11-15 11:07:41.451407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.451450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.451467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.451474] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.451481] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.461040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 00:25:52.728 [2024-11-15 11:07:41.471519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.471564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.471580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.471588] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.471594] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.481127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 00:25:52.728 [2024-11-15 11:07:41.491647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.491688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.491704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.491711] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.491718] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.501290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 
00:25:52.728 [2024-11-15 11:07:41.511720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.511766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.511782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.511790] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.511796] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.521309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 00:25:52.728 [2024-11-15 11:07:41.531733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.531781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.531797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.531805] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.531811] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.541375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 00:25:52.728 [2024-11-15 11:07:41.551827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.551867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.551883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.551890] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.551897] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.728 [2024-11-15 11:07:41.561413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.728 qpair failed and we were unable to recover it. 
00:25:52.728 [2024-11-15 11:07:41.571884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.728 [2024-11-15 11:07:41.571928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.728 [2024-11-15 11:07:41.571943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.728 [2024-11-15 11:07:41.571951] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.728 [2024-11-15 11:07:41.571958] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.729 [2024-11-15 11:07:41.581462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.729 qpair failed and we were unable to recover it. 00:25:52.729 [2024-11-15 11:07:41.592014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.729 [2024-11-15 11:07:41.592063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.729 [2024-11-15 11:07:41.592080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.729 [2024-11-15 11:07:41.592087] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.729 [2024-11-15 11:07:41.592094] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.729 [2024-11-15 11:07:41.601553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.729 qpair failed and we were unable to recover it. 00:25:52.729 [2024-11-15 11:07:41.611871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.729 [2024-11-15 11:07:41.611912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.729 [2024-11-15 11:07:41.611928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.729 [2024-11-15 11:07:41.611936] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.729 [2024-11-15 11:07:41.611943] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.621518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 
00:25:52.986 [2024-11-15 11:07:41.632042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.632087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.632103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.632110] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.632117] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.641716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.652088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.652134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.652151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.652158] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.652177] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.661769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.672238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.672279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.672298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.672305] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.672312] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.681787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 
00:25:52.986 [2024-11-15 11:07:41.692130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.692178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.692195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.692203] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.692210] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.701765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.712283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.712329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.712346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.712353] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.712360] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.721932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.732361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.732401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.732417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.732425] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.732432] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.741983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 
00:25:52.986 [2024-11-15 11:07:41.752460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.752500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.752516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.752523] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.752533] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.762166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.772535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.772572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.772588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.772596] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.772602] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.782161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.792620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.792659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.792675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.792682] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.792689] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.802185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 
00:25:52.986 [2024-11-15 11:07:41.812679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.812723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.812739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.812746] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.812754] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.822215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.832713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.832755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.832770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.832778] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.832785] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.842348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 00:25:52.986 [2024-11-15 11:07:41.852758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.986 [2024-11-15 11:07:41.852796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.986 [2024-11-15 11:07:41.852812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.986 [2024-11-15 11:07:41.852819] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.986 [2024-11-15 11:07:41.852826] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:52.986 [2024-11-15 11:07:41.862328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.986 qpair failed and we were unable to recover it. 
00:25:53.243 [2024-11-15 11:07:41.872858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.243 [2024-11-15 11:07:41.872901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.243 [2024-11-15 11:07:41.872917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.243 [2024-11-15 11:07:41.872925] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.243 [2024-11-15 11:07:41.872931] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:53.243 [2024-11-15 11:07:41.882590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.243 qpair failed and we were unable to recover it. 00:25:53.243 [2024-11-15 11:07:41.892944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.243 [2024-11-15 11:07:41.892987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.243 [2024-11-15 11:07:41.893003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.243 [2024-11-15 11:07:41.893011] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.243 [2024-11-15 11:07:41.893017] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:53.243 [2024-11-15 11:07:41.902509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.243 qpair failed and we were unable to recover it. 
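The block that repeats above is a single failed attempt to re-establish an I/O queue pair after the target-side disconnect: the host's Fabrics CONNECT still carries controller ID 0x1, the target no longer knows that controller ("Unknown controller ID 0x1"), and the command is rejected with sct 1, sc 130 (0x82, the Fabrics "Connect Invalid Parameters" status), after which the qpair cannot be recovered. A minimal host-side sketch of where these errors surface, assuming SPDK's public NVMe driver API (the function name is illustrative, not the test's actual code):

```c
/* Sketch only: allocating an I/O qpair over RDMA issues a Fabrics CONNECT;
 * the rejections logged above make the allocation fail, and an already
 * broken connection surfaces as a negative return from the completion
 * poller ("CQ transport error -6 (No such device or address)"). */
#include <errno.h>
#include "spdk/nvme.h"

static int
poll_new_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;

	/* Sends the Fabrics CONNECT for a new I/O queue. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		return -ENXIO;	/* CONNECT rejected, e.g. sct 1, sc 130 */
	}
	/* -ENXIO (-6) from here is the "CQ transport error" in the log. */
	if (spdk_nvme_qpair_process_completions(qpair, 0) < 0) {
		spdk_nvme_ctrlr_free_io_qpair(qpair);
		return -ENXIO;
	}
	return 0;
}
```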
00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Read completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 Write completed with error (sct=0, sc=8) 00:25:54.174 starting I/O failed 00:25:54.174 [2024-11-15 11:07:42.907086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:54.174 [2024-11-15 11:07:42.915798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.174 [2024-11-15 11:07:42.915839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.174 [2024-11-15 11:07:42.915858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.174 [2024-11-15 11:07:42.915866] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.174 [2024-11-15 11:07:42.915872] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:25:54.175 [2024-11-15 11:07:42.925419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:54.175 qpair failed and we were unable to recover it. 00:25:54.175 [2024-11-15 11:07:42.935714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.175 [2024-11-15 11:07:42.935753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.175 [2024-11-15 11:07:42.935769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.175 [2024-11-15 11:07:42.935777] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.175 [2024-11-15 11:07:42.935784] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:25:54.175 [2024-11-15 11:07:42.945520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:54.175 qpair failed and we were unable to recover it. 00:25:54.175 [2024-11-15 11:07:42.955862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.175 [2024-11-15 11:07:42.955909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.175 [2024-11-15 11:07:42.955928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.175 [2024-11-15 11:07:42.955936] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.175 [2024-11-15 11:07:42.955943] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:54.175 [2024-11-15 11:07:42.965524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.175 qpair failed and we were unable to recover it.
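The burst of "completed with error (sct=0, sc=8)" lines above is the other half of the same failure: status code type 0 is the generic set and status code 8 is "Command Aborted due to SQ Deletion", so the 32 I/Os that were queued on the dying qpair all complete with an error before the CONNECT retries resume. A hedged sketch of how such completions look from an SPDK I/O callback (the callback name and context argument are illustrative):

```c
/* Sketch only: an I/O completion callback that reports failed commands the
 * way the log above does; sct/sc come straight from the NVMe completion. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* e.g. "Read completed with error (sct=0, sc=8)" */
		fprintf(stderr, "%s completed with error (sct=%d, sc=%d)\n",
			(const char *)ctx, cpl->status.sct, cpl->status.sc);
	}
}
```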
00:25:54.175 [2024-11-15 11:07:42.975954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.175 [2024-11-15 11:07:42.975997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.175 [2024-11-15 11:07:42.976017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.175 [2024-11-15 11:07:42.976024] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.175 [2024-11-15 11:07:42.976031] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:25:54.175 [2024-11-15 11:07:42.985588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.175 qpair failed and we were unable to recover it. 00:25:54.175 [2024-11-15 11:07:42.985713] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:25:54.175 A controller has encountered a failure and is being reset. 00:25:54.175 [2024-11-15 11:07:42.996122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.175 [2024-11-15 11:07:42.996179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.175 [2024-11-15 11:07:42.996205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.175 [2024-11-15 11:07:42.996217] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.175 [2024-11-15 11:07:42.996226] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:54.175 [2024-11-15 11:07:43.005662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:54.175 qpair failed and we were unable to recover it. 00:25:54.175 [2024-11-15 11:07:43.016029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.175 [2024-11-15 11:07:43.016074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.175 [2024-11-15 11:07:43.016092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.175 [2024-11-15 11:07:43.016099] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.175 [2024-11-15 11:07:43.016106] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:25:54.175 [2024-11-15 11:07:43.025631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:54.175 qpair failed and we were unable to recover it. 
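The "A controller has encountered a failure and is being reset." message above is the host reacting to the dead admin connection: the keep-alive command can no longer be submitted, the controller is marked failed, and a reset is attempted (it completes with "Controller properly reset." below). A rough sketch of that host-side path, assuming SPDK's public API (the polling function is illustrative, not the test's actual code):

```c
/* Sketch only: keep-alive runs off admin-queue polling; once the controller
 * is marked failed, the application can try spdk_nvme_ctrlr_reset(). */
#include "spdk/nvme.h"

static int
poll_admin_and_maybe_reset(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Drives keep-alive and other admin completions. */
	if (spdk_nvme_ctrlr_process_admin_completions(ctrlr) < 0 ||
	    spdk_nvme_ctrlr_is_failed(ctrlr)) {
		/* "A controller has encountered a failure and is being
		 * reset." -> on success, "Controller properly reset." */
		return spdk_nvme_ctrlr_reset(ctrlr);
	}
	return 0;
}
```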
00:25:54.175 [2024-11-15 11:07:43.025756] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:54.175 [2024-11-15 11:07:43.058088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:25:54.431 Controller properly reset. 00:25:54.431 Initializing NVMe Controllers 00:25:54.431 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:54.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:54.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:54.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:54.431 Initialization complete. Launching workers. 00:25:54.431 Starting thread on core 1 00:25:54.431 Starting thread on core 2 00:25:54.431 Starting thread on core 3 00:25:54.431 Starting thread on core 0 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:54.431 00:25:54.431 real 0m11.974s 00:25:54.431 user 0m24.465s 00:25:54.431 sys 0m2.272s 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.431 ************************************ 00:25:54.431 END TEST nvmf_target_disconnect_tc2 00:25:54.431 ************************************ 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:54.431 ************************************ 00:25:54.431 START TEST nvmf_target_disconnect_tc3 00:25:54.431 ************************************ 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc3 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1567932 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:25:54.431 11:07:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:25:56.326 
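With tc2 finished, tc3 launches the reconnect example with a transport ID string that also names an alternate target address (alt_traddr:192.168.100.9). The standard keys in that string are handled by SPDK's transport ID parser; alt_traddr is an extra key the reconnect example processes itself. A small sketch of parsing the standard part (the hard-coded string mirrors the command line above):

```c
/* Sketch only: parsing the standard fields of the '-r' transport ID string.
 * alt_traddr is not a standard key; the reconnect example strips it itself. */
#include <stdio.h>
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_nvme_transport_id trid = {0};
	const char *str = "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420";

	if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}
	printf("traddr=%s trsvcid=%s\n", trid.traddr, trid.trsvcid);
	return 0;
}
```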
11:07:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1566580 00:25:56.326 11:07:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Write completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.699 Read completed with error (sct=0, sc=8) 00:25:57.699 starting I/O failed 00:25:57.700 Write completed with error (sct=0, sc=8) 00:25:57.700 starting I/O failed 00:25:57.700 [2024-11-15 11:07:46.352514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:25:58.632 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1566580 Killed "${NVMF_APP[@]}" "$@" 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:25:58.632 
11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1568487 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1568487 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1568487 ']' 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:58.632 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.632 [2024-11-15 11:07:47.246675] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 
00:25:58.632 [2024-11-15 11:07:47.246725] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.632 [2024-11-15 11:07:47.326399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.632 Write completed with error (sct=0, sc=8) 00:25:58.632 starting I/O failed 00:25:58.632 Write completed with error (sct=0, sc=8) 00:25:58.632 starting I/O failed 00:25:58.632 Read completed with error (sct=0, sc=8) 00:25:58.632 starting I/O failed 00:25:58.632 Write completed with error (sct=0, sc=8) 00:25:58.632 starting I/O failed 00:25:58.632 Read completed with error (sct=0, sc=8) 00:25:58.632 starting I/O failed 00:25:58.632 Read completed with error (sct=0, sc=8) 00:25:58.632 starting I/O failed 00:25:58.632 Write completed with error (sct=0, sc=8) 00:25:58.632 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Write completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 Read completed with error (sct=0, sc=8) 00:25:58.633 starting I/O failed 00:25:58.633 [2024-11-15 11:07:47.357079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:25:58.633 [2024-11-15 11:07:47.358811] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 
8) 00:25:58.633 [2024-11-15 11:07:47.358834] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:58.633 [2024-11-15 11:07:47.358841] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:25:58.633 [2024-11-15 11:07:47.366856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.633 [2024-11-15 11:07:47.366889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.633 [2024-11-15 11:07:47.366897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.633 [2024-11-15 11:07:47.366905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.633 [2024-11-15 11:07:47.366912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.633 [2024-11-15 11:07:47.368527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:58.633 [2024-11-15 11:07:47.368639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:58.633 [2024-11-15 11:07:47.368745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:58.633 [2024-11-15 11:07:47.368746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@866 -- # return 0 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.633 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.890 Malloc0 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.890 [2024-11-15 11:07:47.565445] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x192f060/0x193afc0) succeed. 
00:25:58.890 [2024-11-15 11:07:47.574971] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19306f0/0x197c660) succeed. 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.890 [2024-11-15 11:07:47.716226] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.890 11:07:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1567932 00:25:59.821 [2024-11-15 11:07:48.362845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:25:59.822 qpair failed and we were unable to recover it. 
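The rpc_cmd calls above rebuild the target from scratch (Malloc0 bdev, RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1, listener) on the alternate address 192.168.100.9:4420, and the connect attempts that follow fail roughly one second apart with RDMA_CM_EVENT_REJECTED. A hedged sketch of such a host-side retry loop, assuming SPDK's public API (the function and retry policy are illustrative, not the reconnect example's actual code):

```c
/* Sketch only: retry spdk_nvme_connect() against a parsed transport ID;
 * a CM-level reject (RDMA_CM_EVENT_REJECTED) makes it return NULL. */
#include <stddef.h>
#include <unistd.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_with_retry(const struct spdk_nvme_transport_id *trid, int max_tries)
{
	for (int i = 0; i < max_tries; i++) {
		struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(trid, NULL, 0);

		if (ctrlr != NULL) {
			return ctrlr;	/* Fabrics CONNECT accepted */
		}
		sleep(1);	/* matches the ~1 s cadence seen below */
	}
	return NULL;
}
```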
00:25:59.822 [2024-11-15 11:07:48.364533] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:59.822 [2024-11-15 11:07:48.364553] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:59.822 [2024-11-15 11:07:48.364560] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:00.752 [2024-11-15 11:07:49.368391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:26:00.752 qpair failed and we were unable to recover it. 00:26:00.752 [2024-11-15 11:07:49.369922] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:00.752 [2024-11-15 11:07:49.369945] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:00.752 [2024-11-15 11:07:49.369952] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:01.813 [2024-11-15 11:07:50.373619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:26:01.813 qpair failed and we were unable to recover it. 00:26:01.813 [2024-11-15 11:07:50.375225] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:01.813 [2024-11-15 11:07:50.375243] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:01.813 [2024-11-15 11:07:50.375250] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:02.744 [2024-11-15 11:07:51.378882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:26:02.744 qpair failed and we were unable to recover it. 00:26:02.744 [2024-11-15 11:07:51.380479] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:02.744 [2024-11-15 11:07:51.380498] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:02.744 [2024-11-15 11:07:51.380504] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:03.676 [2024-11-15 11:07:52.384114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:26:03.676 qpair failed and we were unable to recover it. 
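
The app_setup_trace notices earlier in this run give the capture recipe for exactly this situation; while the target is live (for example during a reject/retry cycle like the one above) a tracepoint snapshot can be taken with the commands the notices quote (build/bin path assumed from this workspace's layout):

  build/bin/spdk_trace -s nvmf -i 0    # snapshot the 0xFFFF tracepoint group mask of shm id 0
  cp /dev/shm/nvmf_trace.0 /tmp/       # or save the shm file for offline analysis, per the notice
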
00:26:03.676 [2024-11-15 11:07:52.385532] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:03.676 [2024-11-15 11:07:52.385549] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:03.676 [2024-11-15 11:07:52.385555] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:04.606 [2024-11-15 11:07:53.389314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:26:04.606 qpair failed and we were unable to recover it. 00:26:04.606 [2024-11-15 11:07:53.390746] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:04.606 [2024-11-15 11:07:53.390763] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:04.606 [2024-11-15 11:07:53.390769] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:05.536 [2024-11-15 11:07:54.394497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:26:05.536 qpair failed and we were unable to recover it. 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 
00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Write completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 Read completed with error (sct=0, sc=8) 00:26:06.904 starting I/O failed 00:26:06.904 [2024-11-15 11:07:55.399072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Write completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 
starting I/O failed 00:26:07.837 Read completed with error (sct=0, sc=8) 00:26:07.837 starting I/O failed 00:26:07.837 [2024-11-15 11:07:56.403558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:26:07.837 [2024-11-15 11:07:56.405003] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:07.837 [2024-11-15 11:07:56.405022] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:07.837 [2024-11-15 11:07:56.405029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:08.768 [2024-11-15 11:07:57.408742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:26:08.768 qpair failed and we were unable to recover it. 00:26:08.768 [2024-11-15 11:07:57.410267] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:08.768 [2024-11-15 11:07:57.410289] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:08.768 [2024-11-15 11:07:57.410296] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:09.697 [2024-11-15 11:07:58.413974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:26:09.697 qpair failed and we were unable to recover it. 00:26:09.697 [2024-11-15 11:07:58.415639] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:09.697 [2024-11-15 11:07:58.415665] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:09.697 [2024-11-15 11:07:58.415676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:26:10.630 [2024-11-15 11:07:59.419569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:26:10.630 qpair failed and we were unable to recover it. 00:26:10.630 [2024-11-15 11:07:59.421046] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:10.630 [2024-11-15 11:07:59.421063] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:10.630 [2024-11-15 11:07:59.421069] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:26:11.563 [2024-11-15 11:08:00.424896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:26:11.563 qpair failed and we were unable to recover it. 
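
Each 32-entry burst above is one qpair's worth of inflight I/O erroring out (sct=0/sc=8, i.e. command aborted due to SQ deletion) when its completion queue dies, followed by another CM reject on the reconnect path. A hypothetical way to watch the same churn from the target side while the host retries — not part of this test, shown only as a debugging sketch using stock SPDK RPCs:

  scripts/rpc.py nvmf_get_subsystems                                   # confirm cnode1 and its listeners survive
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1  # qpairs appearing/vanishing per retry
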
00:26:11.563 [2024-11-15 11:08:00.426709] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:11.563 [2024-11-15 11:08:00.426731] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:11.563 [2024-11-15 11:08:00.426739] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:26:12.935 [2024-11-15 11:08:01.430563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.935 qpair failed and we were unable to recover it.
00:26:12.935 [2024-11-15 11:08:01.432043] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:12.935 [2024-11-15 11:08:01.432060] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:12.935 [2024-11-15 11:08:01.432067] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:26:13.871 [2024-11-15 11:08:02.435909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:26:13.871 qpair failed and we were unable to recover it.
00:26:13.871 [2024-11-15 11:08:02.436046] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed
00:26:13.871 A controller has encountered a failure and is being reset.
00:26:13.871 Resorting to new failover address 192.168.100.9
00:26:13.871 [2024-11-15 11:08:02.436142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.871 [2024-11-15 11:08:02.436224] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:26:13.871 [2024-11-15 11:08:02.465594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:26:13.871 Controller properly reset.
00:26:13.871 Initializing NVMe Controllers
00:26:13.871 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:26:13.871 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:26:13.871 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:26:13.871 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:26:13.871 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:26:13.871 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:26:13.871 Initialization complete. Launching workers.
00:26:13.871 Starting thread on core 1 00:26:13.871 Starting thread on core 2 00:26:13.871 Starting thread on core 3 00:26:13.871 Starting thread on core 0 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:26:13.871 00:26:13.871 real 0m19.341s 00:26:13.871 user 1m1.497s 00:26:13.871 sys 0m4.640s 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.871 ************************************ 00:26:13.871 END TEST nvmf_target_disconnect_tc3 00:26:13.871 ************************************ 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:13.871 rmmod nvme_rdma 00:26:13.871 rmmod nvme_fabrics 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1568487 ']' 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1568487 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1568487 ']' 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1568487 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1568487 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1568487' 00:26:13.871 killing process with pid 1568487 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 1568487 00:26:13.871 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1568487 00:26:14.130 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.130 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:14.130 00:26:14.130 real 0m38.621s 00:26:14.130 user 2m30.371s 00:26:14.130 sys 0m11.585s 00:26:14.130 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.130 11:08:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 ************************************ 00:26:14.130 END TEST nvmf_target_disconnect 00:26:14.130 ************************************ 00:26:14.130 11:08:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:14.130 00:26:14.130 real 4m59.883s 00:26:14.130 user 12m33.955s 00:26:14.130 sys 1m20.088s 00:26:14.130 11:08:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.130 11:08:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 ************************************ 00:26:14.130 END TEST nvmf_host 00:26:14.130 ************************************ 00:26:14.130 11:08:02 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:26:14.130 00:26:14.130 real 19m10.620s 00:26:14.130 user 49m40.465s 00:26:14.130 sys 4m30.344s 00:26:14.130 11:08:02 nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.130 11:08:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 ************************************ 00:26:14.130 END TEST nvmf_rdma 00:26:14.130 ************************************ 00:26:14.389 11:08:03 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:26:14.389 11:08:03 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:14.389 11:08:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:14.389 11:08:03 -- common/autotest_common.sh@10 -- # set +x 00:26:14.389 ************************************ 00:26:14.390 START TEST spdkcli_nvmf_rdma 00:26:14.390 ************************************ 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:26:14.390 * Looking for test storage... 
00:26:14.390 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:14.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.390 --rc genhtml_branch_coverage=1 00:26:14.390 --rc genhtml_function_coverage=1 00:26:14.390 --rc genhtml_legend=1 00:26:14.390 --rc geninfo_all_blocks=1 00:26:14.390 --rc geninfo_unexecuted_blocks=1 00:26:14.390 00:26:14.390 ' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:14.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:26:14.390 --rc genhtml_branch_coverage=1 00:26:14.390 --rc genhtml_function_coverage=1 00:26:14.390 --rc genhtml_legend=1 00:26:14.390 --rc geninfo_all_blocks=1 00:26:14.390 --rc geninfo_unexecuted_blocks=1 00:26:14.390 00:26:14.390 ' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:14.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.390 --rc genhtml_branch_coverage=1 00:26:14.390 --rc genhtml_function_coverage=1 00:26:14.390 --rc genhtml_legend=1 00:26:14.390 --rc geninfo_all_blocks=1 00:26:14.390 --rc geninfo_unexecuted_blocks=1 00:26:14.390 00:26:14.390 ' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:14.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.390 --rc genhtml_branch_coverage=1 00:26:14.390 --rc genhtml_function_coverage=1 00:26:14.390 --rc genhtml_legend=1 00:26:14.390 --rc geninfo_all_blocks=1 00:26:14.390 --rc geninfo_unexecuted_blocks=1 00:26:14.390 00:26:14.390 ' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1571319 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1571319 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # '[' -z 1571319 ']' 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:14.390 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:14.650 [2024-11-15 11:08:03.288032] Starting SPDK v25.01-pre git sha1 30279d1cf / DPDK 24.03.0 initialization... 00:26:14.650 [2024-11-15 11:08:03.288084] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571319 ] 00:26:14.650 [2024-11-15 11:08:03.349129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:14.650 [2024-11-15 11:08:03.393057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.650 [2024-11-15 11:08:03.393062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@866 -- # return 0 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
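
run_nvmf_tgt above launches the target with a two-core mask and then blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent, as a sketch (spdk_get_version is simply a cheap RPC to poll; the test's waitforlisten does additional bookkeeping):

  build/bin/nvmf_tgt -m 0x3 -p 0 &
  until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done   # wait for /var/tmp/spdk.sock
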
00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.650 11:08:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.218 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:26:21.219 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:26:21.219 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:26:21.219 Found net devices under 0000:af:00.0: mlx_0_0 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:26:21.219 Found net devices under 0000:af:00.1: mlx_0_1 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
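
gather_supported_nvmf_pci_devs above walks a vendor/device table and lands on the two Mellanox ports (0x15b3:0x1017) at 0000:af:00.0 and 0000:af:00.1. The same enumeration can be reproduced by hand; a sketch using the IDs and netdev names from the lines above:

  lspci -d 15b3:1017                          # the two Mellanox functions found above
  ls /sys/bus/pci/devices/0000:af:00.0/net    # -> mlx_0_0
  ls /sys/bus/pci/devices/0000:af:00.1/net    # -> mlx_0_1
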
00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:21.219 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:21.219 link/ether b8:59:9f:af:fd:68 
brd ff:ff:ff:ff:ff:ff 00:26:21.219 altname enp175s0f0np0 00:26:21.219 altname ens801f0np0 00:26:21.219 inet 192.168.100.8/24 scope global mlx_0_0 00:26:21.219 valid_lft forever preferred_lft forever 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:21.219 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:21.219 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:26:21.219 altname enp175s0f1np1 00:26:21.219 altname ens801f1np1 00:26:21.219 inet 192.168.100.9/24 scope global mlx_0_1 00:26:21.219 valid_lft forever preferred_lft forever 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:21.219 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in 
$(get_rdma_if_list) 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:21.220 192.168.100.9' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:21.220 192.168.100.9' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:21.220 192.168.100.9' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:21.220 11:08:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:21.220 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:21.220 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:21.220 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:21.220 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:21.220 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:21.220 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:21.220 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:21.220 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:26:21.220 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:26:21.220 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:21.220 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:21.220 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:21.220 ' 00:26:23.124 [2024-11-15 11:08:11.736718] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1091560/0x109ed80) succeed. 00:26:23.124 [2024-11-15 11:08:11.746392] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1092c40/0x111edc0) succeed. 
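
The spdkcli_job.py batch above replays one spdkcli command per line against the running target; interactively, the first few steps would look like this sketch (commands copied verbatim from the batch):

  scripts/spdkcli.py
  /bdevs/malloc create 32 512 Malloc1
  /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
  exit

The resulting tree can then be dumped with scripts/spdkcli.py ll /nvmf, which is exactly what check_match diffs against spdkcli_nvmf.test.match below.
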
00:26:24.501 [2024-11-15 11:08:13.025080] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:26:26.407 [2024-11-15 11:08:15.272342] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:26:28.942 [2024-11-15 11:08:17.202762] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:26:29.878 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:29.878 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:29.878 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:29.878 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:29.878 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:29.878 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:29.878 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:29.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:29.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:29.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:26:29.878 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:29.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:29.878 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:26:30.137 11:08:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:30.395 11:08:19 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:30.396 11:08:19 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:30.396 11:08:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:30.396 11:08:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.396 11:08:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:30.654 11:08:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:30.654 11:08:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.654 11:08:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:30.654 11:08:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:30.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:30.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:30.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:30.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:26:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:26:30.655 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:30.655 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:30.655 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:30.655 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:30.655 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:30.655 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:30.655 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:30.655 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:30.655 ' 00:26:35.929 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:35.929 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:35.929 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:35.929 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:35.929 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:26:35.929 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:26:35.929 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:35.929 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:35.929 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:35.929 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:35.929 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:35.929 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:35.929 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:35.929 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1571319 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # '[' -z 1571319 ']' 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # kill -0 1571319 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # uname 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1571319 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1571319' 00:26:35.929 killing process with pid 1571319 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@971 -- # kill 1571319 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@976 -- # wait 1571319 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:35.929 rmmod nvme_rdma 00:26:35.929 rmmod nvme_fabrics 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:35.929 00:26:35.929 real 0m21.746s 00:26:35.929 user 0m46.254s 00:26:35.929 sys 0m4.968s 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:35.929 11:08:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:35.929 ************************************ 00:26:35.929 END TEST spdkcli_nvmf_rdma 00:26:35.929 ************************************ 00:26:36.188 11:08:24 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:36.188 11:08:24 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:36.188 11:08:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:36.188 11:08:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:36.188 11:08:24 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:36.188 11:08:24 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:26:36.188 11:08:24 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:36.188 11:08:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.188 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:26:36.188 11:08:24 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:36.188 11:08:24 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:26:36.188 11:08:24 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:26:36.188 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:26:40.380 INFO: APP EXITING 00:26:40.380 INFO: killing all VMs 00:26:40.380 INFO: killing vhost app 00:26:40.380 INFO: EXIT DONE 00:26:42.917 Waiting for block devices as requested 00:26:42.917 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:42.917 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:42.917 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:42.917 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:42.917 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:42.917 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:43.176 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:43.176 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 
00:26:43.176 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:43.176 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:43.435 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:43.436 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:43.436 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:43.694 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:43.695 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:43.695 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:43.695 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:46.983 Cleaning 00:26:46.983 Removing: /var/run/dpdk/spdk0/config 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:26:46.983 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:46.983 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:46.983 Removing: /var/run/dpdk/spdk1/config 00:26:46.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:46.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:46.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:46.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:46.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:26:46.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:26:46.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:26:46.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:26:46.984 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:46.984 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:46.984 Removing: /var/run/dpdk/spdk1/mp_socket 00:26:46.984 Removing: /var/run/dpdk/spdk2/config 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:26:46.984 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:46.984 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:46.984 Removing: /var/run/dpdk/spdk3/config 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:26:46.984 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:46.984 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:46.984 Removing: /var/run/dpdk/spdk4/config 00:26:46.984 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:26:46.984 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:46.984 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:46.984 Removing: /dev/shm/bdevperf_trace.pid1310167 00:26:46.984 Removing: /dev/shm/bdev_svc_trace.1 00:26:46.984 Removing: /dev/shm/nvmf_trace.0 00:26:46.984 Removing: /dev/shm/spdk_tgt_trace.pid1268587 00:26:46.984 Removing: /var/run/dpdk/spdk0 00:26:46.984 Removing: /var/run/dpdk/spdk1 00:26:46.984 Removing: /var/run/dpdk/spdk2 00:26:46.984 Removing: /var/run/dpdk/spdk3 00:26:46.984 Removing: /var/run/dpdk/spdk4 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1266417 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1267490 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1268587 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1269159 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1270053 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1270204 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1271199 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1271253 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1271567 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1276353 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1277778 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1278073 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1278365 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1278754 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1279262 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1279563 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1279722 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1280016 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1280723 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1283729 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1283987 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1284251 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1284265 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1284751 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1284856 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1285280 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1285485 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1285749 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1285764 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1286023 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1286034 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1286597 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1286855 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1287151 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1290777 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1294742 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1304888 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1305601 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1310167 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1310412 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1314444 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1320165 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1322897 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1332771 00:26:46.984 Removing: /var/run/dpdk/spdk_pid1357849 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1361454 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1410336 
00:26:47.243 Removing: /var/run/dpdk/spdk_pid1415363 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1421196 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1430186 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1475881 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1476740 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1477839 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1478927 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1483253 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1493349 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1499917 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1500844 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1501669 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1502520 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1502949 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1507225 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1507227 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1511542 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1512012 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1512702 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1513397 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1513405 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1517785 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1518297 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1522447 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1525014 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1530315 00:26:47.243 Removing: /var/run/dpdk/spdk_pid1540109 00:26:47.244 Removing: /var/run/dpdk/spdk_pid1540127 00:26:47.244 Removing: /var/run/dpdk/spdk_pid1559623 00:26:47.244 Removing: /var/run/dpdk/spdk_pid1559854 00:26:47.244 Removing: /var/run/dpdk/spdk_pid1565588 00:26:47.244 Removing: /var/run/dpdk/spdk_pid1566029 00:26:47.244 Removing: /var/run/dpdk/spdk_pid1567932 00:26:47.244 Removing: /var/run/dpdk/spdk_pid1571319 00:26:47.244 Clean 00:26:47.244 11:08:36 -- common/autotest_common.sh@1451 -- # return 0 00:26:47.244 11:08:36 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:26:47.244 11:08:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:47.244 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:26:47.244 11:08:36 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:26:47.244 11:08:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:47.244 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:26:47.502 11:08:36 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:26:47.502 11:08:36 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:26:47.502 11:08:36 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:26:47.502 11:08:36 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:26:47.502 11:08:36 -- spdk/autotest.sh@394 -- # hostname 00:26:47.502 11:08:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-09 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:26:47.502 geninfo: WARNING: invalid characters removed from testname! 
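The geninfo warning above is harmless: lcov strips characters that are not valid in a tracefile test name, and the hostname-derived name spdk-wfp-09 contains '-'. The coverage steps that follow reduce to two operations: merge the pre-test baseline capture with the per-test capture, then repeatedly filter out sources that are not SPDK's own. With the long --rc option lists elided, the core of it is (file names as in the log; the same -r filter is also applied to a few examples/ and app/ directories below):

  # merge baseline and per-test coverage into one tracefile
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # drop bundled DPDK and system sources so only SPDK code is reported
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov -q -r cov_total.info '/usr/*' -o cov_total.info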
00:27:09.428 11:08:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:09.429 11:08:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:11.332 11:09:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:13.235 11:09:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:15.140 11:09:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:17.181 11:09:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:18.560 11:09:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:18.560 11:09:07 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:18.560 11:09:07 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:27:18.560 11:09:07 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:18.560 11:09:07 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:18.560 11:09:07 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:27:18.560 + [[ -n 1190633 ]] 00:27:18.560 + sudo kill 1190633 00:27:18.570 [Pipeline] } 00:27:18.585 [Pipeline] // stage 00:27:18.590 [Pipeline] } 00:27:18.605 [Pipeline] 
// timeout 00:27:18.610 [Pipeline] } 00:27:18.624 [Pipeline] // catchError 00:27:18.629 [Pipeline] } 00:27:18.643 [Pipeline] // wrap 00:27:18.650 [Pipeline] } 00:27:18.663 [Pipeline] // catchError 00:27:18.672 [Pipeline] stage 00:27:18.675 [Pipeline] { (Epilogue) 00:27:18.689 [Pipeline] catchError 00:27:18.691 [Pipeline] { 00:27:18.704 [Pipeline] echo 00:27:18.707 Cleanup processes 00:27:18.713 [Pipeline] sh 00:27:19.000 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:19.000 1585709 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:19.015 [Pipeline] sh 00:27:19.299 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:19.299 ++ grep -v 'sudo pgrep' 00:27:19.299 ++ awk '{print $1}' 00:27:19.299 + sudo kill -9 00:27:19.299 + true 00:27:19.311 [Pipeline] sh 00:27:19.596 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:29.585 [Pipeline] sh 00:27:29.871 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:29.871 Artifacts sizes are good 00:27:29.885 [Pipeline] archiveArtifacts 00:27:29.892 Archiving artifacts 00:27:30.008 [Pipeline] sh 00:27:30.297 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:27:30.313 [Pipeline] cleanWs 00:27:30.324 [WS-CLEANUP] Deleting project workspace... 00:27:30.324 [WS-CLEANUP] Deferred wipeout is used... 00:27:30.330 [WS-CLEANUP] done 00:27:30.332 [Pipeline] } 00:27:30.350 [Pipeline] // catchError 00:27:30.362 [Pipeline] sh 00:27:30.643 + logger -p user.info -t JENKINS-CI 00:27:30.653 [Pipeline] } 00:27:30.667 [Pipeline] // stage 00:27:30.673 [Pipeline] } 00:27:30.688 [Pipeline] // node 00:27:30.694 [Pipeline] End of Pipeline 00:27:30.734 Finished: SUCCESS
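For later triage: everything from the RDMA IP discovery through the config match and teardown above is driven by test/spdkcli/nvmf.sh in the SPDK tree (the spdkcli/nvmf.sh@NN markers in the trace give the exact line numbers), with nvmf/common.sh supplying the transport plumbing. A hedged sketch of a local re-run from a built checkout; the --transport flag is the usual SPDK nvmf test convention handled by nvmf/common.sh, so treat it as an assumption rather than a documented interface:

  # requires RDMA-capable NICs (this job used mlx5, per the IB device notices earlier)
  cd /path/to/spdk
  ./test/spdkcli/nvmf.sh --transport=rdma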